<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<style>
.dropdown {
position: relative;
display: inline-block;
}
.dropdown-content {
display: none;
position: absolute;
background-color: #f9f9f9;
min-width: 160px;
box-shadow: 0px 8px 16px 0px rgba(0,0,0,0.2);
padding: 12px 16px;
z-index: 1;
text-align: left;
}
.dropdown:hover .dropdown-content {
display: block;
}
.dropdown-option:hover {
color: #FF4500;
}
.dropdown-option-active {
color: #FF4500;
font-weight: lighter;
}
.dropdown-option {
color: #000000;
font-weight: lighter;
}
.dropdown-header {
color: #FFFFFF;
display: inline-flex;
}
.dropdown-caret {
width: 18px;
height: 54px;
}
.dropdown-caret-path {
fill: #FFFFFF;
}
</style>
<title>mxnet.gluon.nn.conv_layers &#8212; Apache MXNet documentation</title>
<link rel="stylesheet" href="../../../../_static/basic.css" type="text/css" />
<link rel="stylesheet" href="../../../../_static/pygments.css" type="text/css" />
<link rel="stylesheet" type="text/css" href="../../../../_static/mxnet.css" />
<link rel="stylesheet" href="../../../../_static/material-design-lite-1.3.0/material.blue-deep_orange.min.css" type="text/css" />
<link rel="stylesheet" href="../../../../_static/sphinx_materialdesign_theme.css" type="text/css" />
<link rel="stylesheet" href="../../../../_static/fontawesome/all.css" type="text/css" />
<link rel="stylesheet" href="../../../../_static/fonts.css" type="text/css" />
<link rel="stylesheet" href="../../../../_static/feedback.css" type="text/css" />
<script id="documentation_options" data-url_root="../../../../" src="../../../../_static/documentation_options.js"></script>
<script src="../../../../_static/jquery.js"></script>
<script src="../../../../_static/underscore.js"></script>
<script src="../../../../_static/doctools.js"></script>
<script src="../../../../_static/language_data.js"></script>
<script src="../../../../_static/matomo_analytics.js"></script>
<script src="../../../../_static/autodoc.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/latest.js?config=TeX-AMS-MML_HTMLorMML"></script>
<script type="text/x-mathjax-config">MathJax.Hub.Config({"tex2jax": {"inlineMath": [["$", "$"], ["\\(", "\\)"]], "processEscapes": true, "ignoreClass": "document", "processClass": "math|output_area"}})</script>
<script src="../../../../_static/sphinx_materialdesign_theme.js"></script>
<link rel="shortcut icon" href="../../../../_static/mxnet-icon.png"/>
<link rel="index" title="Index" href="../../../../genindex.html" />
<link rel="search" title="Search" href="../../../../search.html" />
</head>
<body><header class="site-header" role="banner">
<div class="wrapper">
<a class="site-title" rel="author" href="/"><img
src="../../../../_static/mxnet_logo.png" class="site-header-logo"></a>
<nav class="site-nav">
<input type="checkbox" id="nav-trigger" class="nav-trigger"/>
<label for="nav-trigger">
<span class="menu-icon">
<svg viewBox="0 0 18 15" width="18px" height="15px">
<path d="M18,1.484c0,0.82-0.665,1.484-1.484,1.484H1.484C0.665,2.969,0,2.304,0,1.484l0,0C0,0.665,0.665,0,1.484,0 h15.032C17.335,0,18,0.665,18,1.484L18,1.484z M18,7.516C18,8.335,17.335,9,16.516,9H1.484C0.665,9,0,8.335,0,7.516l0,0 c0-0.82,0.665-1.484,1.484-1.484h15.032C17.335,6.031,18,6.696,18,7.516L18,7.516z M18,13.516C18,14.335,17.335,15,16.516,15H1.484 C0.665,15,0,14.335,0,13.516l0,0c0-0.82,0.665-1.483,1.484-1.483h15.032C17.335,12.031,18,12.695,18,13.516L18,13.516z"/>
</svg>
</span>
</label>
<div class="trigger">
<a class="page-link" href="/get_started">Get Started</a>
<a class="page-link" href="/features">Features</a>
<a class="page-link" href="/ecosystem">Ecosystem</a>
<a class="page-link page-current" href="/api">Docs & Tutorials</a>
<a class="page-link" href="/trusted_by">Trusted By</a>
<a class="page-link" href="https://github.com/apache/mxnet">GitHub</a>
<div class="dropdown" style="min-width:100px">
<span class="dropdown-header">Apache
<svg class="dropdown-caret" viewBox="0 0 32 32" class="icon icon-caret-bottom" aria-hidden="true"><path class="dropdown-caret-path" d="M24 11.305l-7.997 11.39L8 11.305z"></path></svg>
</span>
<div class="dropdown-content" style="min-width:250px">
<a href="https://www.apache.org/foundation/">Apache Software Foundation</a>
<a href="https://incubator.apache.org/">Apache Incubator</a>
<a href="https://www.apache.org/licenses/">License</a>
<a href="/versions/1.9.1/api/faq/security.html">Security</a>
<a href="https://privacy.apache.org/policies/privacy-policy-public.html">Privacy</a>
<a href="https://www.apache.org/events/current-event">Events</a>
<a href="https://www.apache.org/foundation/sponsorship.html">Sponsorship</a>
<a href="https://www.apache.org/foundation/thanks.html">Thanks</a>
</div>
</div>
<div class="dropdown">
<span class="dropdown-header">master
<svg class="dropdown-caret" viewBox="0 0 32 32" class="icon icon-caret-bottom" aria-hidden="true"><path class="dropdown-caret-path" d="M24 11.305l-7.997 11.39L8 11.305z"></path></svg>
</span>
<div class="dropdown-content">
<a class="dropdown-option-active" href="/versions/master/">master</a><br>
<a class="dropdown-option" href="/versions/1.9.1/">1.9.1</a><br>
<a class="dropdown-option" href="/versions/1.8.0/">1.8.0</a><br>
<a class="dropdown-option" href="/versions/1.7.0/">1.7.0</a><br>
<a class="dropdown-option" href="/versions/1.6.0/">1.6.0</a><br>
<a class="dropdown-option" href="/versions/1.5.0/">1.5.0</a><br>
<a class="dropdown-option" href="/versions/1.4.1/">1.4.1</a><br>
<a class="dropdown-option" href="/versions/1.3.1/">1.3.1</a><br>
<a class="dropdown-option" href="/versions/1.2.1/">1.2.1</a><br>
<a class="dropdown-option" href="/versions/1.1.0/">1.1.0</a><br>
<a class="dropdown-option" href="/versions/1.0.0/">1.0.0</a><br>
<a class="dropdown-option" href="/versions/0.12.1/">0.12.1</a><br>
<a class="dropdown-option" href="/versions/0.11.0/">0.11.0</a>
</div>
</div>
</div>
</nav>
</div>
</header>
<div class="mdl-layout mdl-js-layout mdl-layout--fixed-header mdl-layout--fixed-drawer"><header class="mdl-layout__header mdl-layout__header--waterfall ">
<div class="mdl-layout__header-row">
<nav class="mdl-navigation breadcrumb">
<a class="mdl-navigation__link" href="../../../index.html">Module code</a><i class="material-icons">navigate_next</i>
<a class="mdl-navigation__link is-active">mxnet.gluon.nn.conv_layers</a>
</nav>
<div class="mdl-layout-spacer"></div>
<nav class="mdl-navigation">
<form class="form-inline pull-sm-right" action="../../../../search.html" method="get">
<div class="mdl-textfield mdl-js-textfield mdl-textfield--expandable mdl-textfield--floating-label mdl-textfield--align-right">
<label id="quick-search-icon" class="mdl-button mdl-js-button mdl-button--icon" for="waterfall-exp">
<i class="material-icons">search</i>
</label>
<div class="mdl-textfield__expandable-holder">
<input class="mdl-textfield__input" type="text" name="q" id="waterfall-exp" placeholder="Search" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</div>
</div>
<div class="mdl-tooltip" data-mdl-for="quick-search-icon">
Quick search
</div>
</form>
<a id="button-show-github"
href="https://github.com/apache/mxnet/edit/master/docs/python_docs/python/_modules/mxnet/gluon/nn/conv_layers" class="mdl-button mdl-js-button mdl-button--icon">
<i class="material-icons">edit</i>
</a>
<div class="mdl-tooltip" data-mdl-for="button-show-github">
Edit on GitHub
</div>
</nav>
</div>
<div class="mdl-layout__header-row header-links">
<div class="mdl-layout-spacer"></div>
<nav class="mdl-navigation">
</nav>
</div>
</header><header class="mdl-layout__drawer">
<div class="globaltoc">
<span class="mdl-layout-title toc">Table Of Contents</span>
<nav class="mdl-navigation">
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../../../tutorials/index.html">Python Tutorials</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../../../tutorials/getting-started/index.html">Getting Started</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/index.html">Crash Course</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/0-introduction.html">Introduction</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/1-nparray.html">Step 1: Manipulate data with NP on MXNet</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/2-create-nn.html">Step 2: Create a neural network</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/3-autograd.html">Step 3: Automatic differentiation with autograd</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/4-components.html">Step 4: Necessary components that are not in the network</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/5-datasets.html">Step 5: <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s and <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/5-datasets.html#Using-own-data-with-included-Datasets">Using own data with included <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/5-datasets.html#Using-your-own-data-with-custom-Datasets">Using your own data with custom <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/5-datasets.html#New-in-MXNet-2.0:-faster-C++-backend-dataloaders">New in MXNet 2.0: faster C++ backend dataloaders</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/6-train-nn.html">Step 6: Train a Neural Network</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/7-use-gpus.html">Step 7: Load and Run a NN using GPU</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/getting-started/to-mxnet/index.html">Moving to MXNet from Other Frameworks</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/to-mxnet/pytorch.html">PyTorch vs Apache MXNet</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/getting-started/gluon_from_experiment_to_deployment.html">Gluon: from experiment to deployment</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/getting-started/gluon_migration_guide.html">Gluon2.0: Migration Guide</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/getting-started/logistic_regression_explained.html">Logistic regression explained</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/image/mnist.html">MNIST</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../tutorials/packages/index.html">Packages</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/autograd/index.html">Automatic Differentiation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/gluon/index.html">Gluon</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/index.html">Blocks</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/custom-layer.html">Custom Layers</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/hybridize.html">Hybridize</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/init.html">Initialization</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/naming.html">Parameter and Block Naming</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/nn.html">Layers and Blocks</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/parameters.html">Parameter Management</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/activations/activations.html">Activation Blocks</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/gluon/data/index.html">Data Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/data/data_augmentation.html">Image Augmentation</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/data/datasets.html">Gluon <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s and <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-included-Datasets">Using own data with included <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-custom-Datasets">Using own data with custom <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/data/datasets.html#Appendix:-Upgrading-from-Module-DataIter-to-Gluon-DataLoader">Appendix: Upgrading from Module <code class="docutils literal notranslate"><span class="pre">DataIter</span></code> to Gluon <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/gluon/image/index.html">Image Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/image/info_gan.html">Image similarity search with InfoGAN</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/image/mnist.html">Handwritten Digit Recognition</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/gluon/loss/index.html">Losses</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/loss/custom-loss.html">Custom Loss Blocks</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/loss/kl_divergence.html">Kullback-Leibler (KL) Divergence</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/loss/loss.html">Loss functions</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/gluon/text/index.html">Text Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/text/gnmt.html">Google Neural Machine Translation</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/text/transformer.html">Machine Translation with Transformer</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/index.html">Training</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/fit_api_tutorial.html">MXNet Gluon Fit API</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/trainer.html">Trainer</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/learning_rates/index.html">Learning Rates</a><ul>
<li class="toctree-l6"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/learning_rates/learning_rate_finder.html">Learning Rate Finder</a></li>
<li class="toctree-l6"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules.html">Learning Rate Schedules</a></li>
<li class="toctree-l6"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules_advanced.html">Advanced Learning Rate Schedules</a></li>
</ul>
</li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/normalization/index.html">Normalization Blocks</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/kvstore/index.html">KVStore</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/kvstore/kvstore.html">Distributed Key-Value Store</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/legacy/index.html">Legacy</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/index.html">NDArray</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/01-ndarray-intro.html">An Intro: Manipulate Data the MXNet Way with NDArray</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/02-ndarray-operations.html">NDArray Operations</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/03-ndarray-contexts.html">NDArray Contexts</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/gotchas_numpy_in_mxnet.html">Gotchas using NumPy in Apache MXNet</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/sparse/index.html">Tutorials</a><ul>
<li class="toctree-l6"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/sparse/csr.html">CSRNDArray - NDArray in Compressed Sparse Row Storage Format</a></li>
<li class="toctree-l6"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/sparse/row_sparse.html">RowSparseNDArray - NDArray for Sparse Gradient Updates</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/np/index.html">What is NP on MXNet</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/np/cheat-sheet.html">The NP on MXNet cheat sheet</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/np/np-vs-numpy.html">Differences between NP on MXNet and NumPy</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/onnx/index.html">ONNX</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/onnx/fine_tuning_gluon.html">Fine-tuning an ONNX model</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/onnx/inference_on_onnx_model.html">Running inference on MXNet/Gluon from an ONNX model</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/deploy/export/onnx.html">Export ONNX Models</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/optimizer/index.html">Optimizers</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/viz/index.html">Visualization</a><ul>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/visualize_graph">Visualize networks</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../tutorials/performance/index.html">Performance</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/performance/compression/index.html">Compression</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/performance/compression/int8.html">Deploy with int-8</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/float16">Float16</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/gradient_compression">Gradient Compression</a></li>
<li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/int8_inference.html">GluonCV with Quantized Models</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/performance/backend/index.html">Accelerated Backend Tools</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/performance/backend/dnnl/index.html">oneDNN</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/performance/backend/dnnl/dnnl_readme.html">Install MXNet with oneDNN</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/performance/backend/dnnl/dnnl_quantization.html">oneDNN Quantization</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/performance/backend/dnnl/dnnl_quantization_inc.html">Improving accuracy with Intel® Neural Compressor</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/performance/backend/tvm.html">Use TVM</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/performance/backend/profiler.html">Profiling MXNet Models</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/performance/backend/amp.html">Using AMP: Automatic Mixed Precision</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../tutorials/deploy/index.html">Deployment</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/deploy/export/index.html">Export</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/deploy/export/onnx.html">Exporting to ONNX format</a></li>
<li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/export_network.html">Export Gluon CV Models</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Save / Load Parameters</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/deploy/inference/index.html">Inference</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/deploy/inference/cpp.html">Deploy into C++</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/deploy/inference/image_classification_jetson.html">Image Classication using pretrained ResNet-50 model on Jetson module</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/deploy/run-on-aws/index.html">Run on AWS</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/deploy/run-on-aws/use_ec2.html">Run on an EC2 Instance</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/deploy/run-on-aws/use_sagemaker.html">Run on Amazon SageMaker</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/deploy/run-on-aws/cloud.html">MXNet on the Cloud</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../tutorials/extend/index.html">Extend</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/extend/customop.html">Custom Numpy Operators</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/new_op">New Operator Creation</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/add_op_in_backend">New Operator in MXNet Backend</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/using_rtc">Using RTC for CUDA kernels</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../../../api/index.html">Python API</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/np/index.html">mxnet.np</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/np/arrays.html">Array objects</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/arrays.ndarray.html">The N-dimensional array (<code class="xref py py-class docutils literal notranslate"><span class="pre">ndarray</span></code>)</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/arrays.indexing.html">Indexing</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/np/routines.html">Routines</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/routines.array-creation.html">Array creation routines</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.eye.html">mxnet.np.eye</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.empty.html">mxnet.np.empty</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.full.html">mxnet.np.full</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.identity.html">mxnet.np.identity</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ones.html">mxnet.np.ones</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ones_like.html">mxnet.np.ones_like</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.zeros.html">mxnet.np.zeros</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.zeros_like.html">mxnet.np.zeros_like</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.array.html">mxnet.np.array</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.copy.html">mxnet.np.copy</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arange.html">mxnet.np.arange</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linspace.html">mxnet.np.linspace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.logspace.html">mxnet.np.logspace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.meshgrid.html">mxnet.np.meshgrid</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.tril.html">mxnet.np.tril</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/routines.array-manipulation.html">Array manipulation routines</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.reshape.html">mxnet.np.reshape</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ravel.html">mxnet.np.ravel</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ndarray.flatten.html">mxnet.np.ndarray.flatten</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.swapaxes.html">mxnet.np.swapaxes</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ndarray.T.html">mxnet.np.ndarray.T</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.transpose.html">mxnet.np.transpose</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.moveaxis.html">mxnet.np.moveaxis</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.rollaxis.html">mxnet.np.rollaxis</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.expand_dims.html">mxnet.np.expand_dims</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.squeeze.html">mxnet.np.squeeze</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.broadcast_to.html">mxnet.np.broadcast_to</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.broadcast_arrays.html">mxnet.np.broadcast_arrays</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.atleast_1d.html">mxnet.np.atleast_1d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.atleast_2d.html">mxnet.np.atleast_2d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.atleast_3d.html">mxnet.np.atleast_3d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.concatenate.html">mxnet.np.concatenate</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.stack.html">mxnet.np.stack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.dstack.html">mxnet.np.dstack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.vstack.html">mxnet.np.vstack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.column_stack.html">mxnet.np.column_stack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.hstack.html">mxnet.np.hstack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.split.html">mxnet.np.split</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.hsplit.html">mxnet.np.hsplit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.vsplit.html">mxnet.np.vsplit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.array_split.html">mxnet.np.array_split</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.dsplit.html">mxnet.np.dsplit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.tile.html">mxnet.np.tile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.repeat.html">mxnet.np.repeat</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.unique.html">mxnet.np.unique</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.delete.html">mxnet.np.delete</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.insert.html">mxnet.np.insert</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.append.html">mxnet.np.append</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.resize.html">mxnet.np.resize</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.trim_zeros.html">mxnet.np.trim_zeros</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.reshape.html">mxnet.np.reshape</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.flip.html">mxnet.np.flip</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.roll.html">mxnet.np.roll</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.rot90.html">mxnet.np.rot90</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.fliplr.html">mxnet.np.fliplr</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.flipud.html">mxnet.np.flipud</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/routines.io.html">Input and output</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.genfromtxt.html">mxnet.np.genfromtxt</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ndarray.tolist.html">mxnet.np.ndarray.tolist</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.set_printoptions.html">mxnet.np.set_printoptions</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/routines.linalg.html">Linear algebra (<code class="xref py py-mod docutils literal notranslate"><span class="pre">numpy.linalg</span></code>)</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.dot.html">mxnet.np.dot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.vdot.html">mxnet.np.vdot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.inner.html">mxnet.np.inner</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.outer.html">mxnet.np.outer</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.tensordot.html">mxnet.np.tensordot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.einsum.html">mxnet.np.einsum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.multi_dot.html">mxnet.np.linalg.multi_dot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.matmul.html">mxnet.np.matmul</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.matrix_power.html">mxnet.np.linalg.matrix_power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.kron.html">mxnet.np.kron</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.svd.html">mxnet.np.linalg.svd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.cholesky.html">mxnet.np.linalg.cholesky</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.qr.html">mxnet.np.linalg.qr</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.eig.html">mxnet.np.linalg.eig</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.eigh.html">mxnet.np.linalg.eigh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.eigvals.html">mxnet.np.linalg.eigvals</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.eigvalsh.html">mxnet.np.linalg.eigvalsh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.norm.html">mxnet.np.linalg.norm</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.trace.html">mxnet.np.trace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.cond.html">mxnet.np.linalg.cond</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.det.html">mxnet.np.linalg.det</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.matrix_rank.html">mxnet.np.linalg.matrix_rank</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.slogdet.html">mxnet.np.linalg.slogdet</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.solve.html">mxnet.np.linalg.solve</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.tensorsolve.html">mxnet.np.linalg.tensorsolve</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.lstsq.html">mxnet.np.linalg.lstsq</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.inv.html">mxnet.np.linalg.inv</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.pinv.html">mxnet.np.linalg.pinv</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.tensorinv.html">mxnet.np.linalg.tensorinv</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/routines.math.html">Mathematical functions</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.sin.html">mxnet.np.sin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.cos.html">mxnet.np.cos</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.tan.html">mxnet.np.tan</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arcsin.html">mxnet.np.arcsin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arccos.html">mxnet.np.arccos</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arctan.html">mxnet.np.arctan</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.degrees.html">mxnet.np.degrees</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.radians.html">mxnet.np.radians</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.hypot.html">mxnet.np.hypot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arctan2.html">mxnet.np.arctan2</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.deg2rad.html">mxnet.np.deg2rad</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.rad2deg.html">mxnet.np.rad2deg</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.unwrap.html">mxnet.np.unwrap</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.sinh.html">mxnet.np.sinh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.cosh.html">mxnet.np.cosh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.tanh.html">mxnet.np.tanh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arcsinh.html">mxnet.np.arcsinh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arccosh.html">mxnet.np.arccosh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arctanh.html">mxnet.np.arctanh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.rint.html">mxnet.np.rint</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.fix.html">mxnet.np.fix</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.floor.html">mxnet.np.floor</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ceil.html">mxnet.np.ceil</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.trunc.html">mxnet.np.trunc</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.around.html">mxnet.np.around</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.round_.html">mxnet.np.round_</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.sum.html">mxnet.np.sum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.prod.html">mxnet.np.prod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.cumsum.html">mxnet.np.cumsum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanprod.html">mxnet.np.nanprod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nansum.html">mxnet.np.nansum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.cumprod.html">mxnet.np.cumprod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nancumprod.html">mxnet.np.nancumprod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nancumsum.html">mxnet.np.nancumsum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.diff.html">mxnet.np.diff</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ediff1d.html">mxnet.np.ediff1d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.cross.html">mxnet.np.cross</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.trapz.html">mxnet.np.trapz</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.exp.html">mxnet.np.exp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.expm1.html">mxnet.np.expm1</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.log.html">mxnet.np.log</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.log10.html">mxnet.np.log10</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.log2.html">mxnet.np.log2</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.log1p.html">mxnet.np.log1p</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.logaddexp.html">mxnet.np.logaddexp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.i0.html">mxnet.np.i0</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ldexp.html">mxnet.np.ldexp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.signbit.html">mxnet.np.signbit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.copysign.html">mxnet.np.copysign</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.frexp.html">mxnet.np.frexp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.spacing.html">mxnet.np.spacing</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.lcm.html">mxnet.np.lcm</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.gcd.html">mxnet.np.gcd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.add.html">mxnet.np.add</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.reciprocal.html">mxnet.np.reciprocal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.negative.html">mxnet.np.negative</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.divide.html">mxnet.np.divide</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.power.html">mxnet.np.power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.subtract.html">mxnet.np.subtract</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.mod.html">mxnet.np.mod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.multiply.html">mxnet.np.multiply</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.true_divide.html">mxnet.np.true_divide</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.remainder.html">mxnet.np.remainder</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.positive.html">mxnet.np.positive</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.float_power.html">mxnet.np.float_power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.fmod.html">mxnet.np.fmod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.modf.html">mxnet.np.modf</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.divmod.html">mxnet.np.divmod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.floor_divide.html">mxnet.np.floor_divide</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.clip.html">mxnet.np.clip</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.sqrt.html">mxnet.np.sqrt</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.cbrt.html">mxnet.np.cbrt</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.square.html">mxnet.np.square</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.absolute.html">mxnet.np.absolute</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.sign.html">mxnet.np.sign</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.maximum.html">mxnet.np.maximum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.minimum.html">mxnet.np.minimum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.fabs.html">mxnet.np.fabs</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.heaviside.html">mxnet.np.heaviside</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.fmax.html">mxnet.np.fmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.fmin.html">mxnet.np.fmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nan_to_num.html">mxnet.np.nan_to_num</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.interp.html">mxnet.np.interp</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/random/index.html">np.random</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.choice.html">mxnet.np.random.choice</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.shuffle.html">mxnet.np.random.shuffle</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.normal.html">mxnet.np.random.normal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.uniform.html">mxnet.np.random.uniform</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.rand.html">mxnet.np.random.rand</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.randint.html">mxnet.np.random.randint</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.beta.html">mxnet.np.random.beta</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.chisquare.html">mxnet.np.random.chisquare</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.exponential.html">mxnet.np.random.exponential</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.f.html">mxnet.np.random.f</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.gamma.html">mxnet.np.random.gamma</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.gumbel.html">mxnet.np.random.gumbel</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.laplace.html">mxnet.np.random.laplace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.logistic.html">mxnet.np.random.logistic</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.lognormal.html">mxnet.np.random.lognormal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.multinomial.html">mxnet.np.random.multinomial</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.multivariate_normal.html">mxnet.np.random.multivariate_normal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.pareto.html">mxnet.np.random.pareto</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.power.html">mxnet.np.random.power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.rayleigh.html">mxnet.np.random.rayleigh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.weibull.html">mxnet.np.random.weibull</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/routines.sort.html">Sorting, searching, and counting</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ndarray.sort.html">mxnet.np.ndarray.sort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.sort.html">mxnet.np.sort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.lexsort.html">mxnet.np.lexsort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.argsort.html">mxnet.np.argsort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.msort.html">mxnet.np.msort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.partition.html">mxnet.np.partition</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.argpartition.html">mxnet.np.argpartition</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.argmax.html">mxnet.np.argmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.argmin.html">mxnet.np.argmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanargmax.html">mxnet.np.nanargmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanargmin.html">mxnet.np.nanargmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.argwhere.html">mxnet.np.argwhere</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nonzero.html">mxnet.np.nonzero</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.flatnonzero.html">mxnet.np.flatnonzero</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.where.html">mxnet.np.where</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.searchsorted.html">mxnet.np.searchsorted</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.extract.html">mxnet.np.extract</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.count_nonzero.html">mxnet.np.count_nonzero</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/routines.statistics.html">Statistics</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.min.html">mxnet.np.min</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.max.html">mxnet.np.max</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.amin.html">mxnet.np.amin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.amax.html">mxnet.np.amax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanmin.html">mxnet.np.nanmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanmax.html">mxnet.np.nanmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ptp.html">mxnet.np.ptp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.percentile.html">mxnet.np.percentile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanpercentile.html">mxnet.np.nanpercentile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.quantile.html">mxnet.np.quantile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanquantile.html">mxnet.np.nanquantile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.mean.html">mxnet.np.mean</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.std.html">mxnet.np.std</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.var.html">mxnet.np.var</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.median.html">mxnet.np.median</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.average.html">mxnet.np.average</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanmedian.html">mxnet.np.nanmedian</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanstd.html">mxnet.np.nanstd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanvar.html">mxnet.np.nanvar</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.corrcoef.html">mxnet.np.corrcoef</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.correlate.html">mxnet.np.correlate</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.cov.html">mxnet.np.cov</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.histogram.html">mxnet.np.histogram</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.histogram2d.html">mxnet.np.histogram2d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.histogramdd.html">mxnet.np.histogramdd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.bincount.html">mxnet.np.bincount</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.histogram_bin_edges.html">mxnet.np.histogram_bin_edges</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.digitize.html">mxnet.np.digitize</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/npx/index.html">NPX: NumPy Neural Network Extension</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.set_np.html">mxnet.npx.set_np</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.reset_np.html">mxnet.npx.reset_np</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.cpu.html">mxnet.npx.cpu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.cpu_pinned.html">mxnet.npx.cpu_pinned</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.gpu.html">mxnet.npx.gpu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.gpu_memory_info.html">mxnet.npx.gpu_memory_info</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.current_device.html">mxnet.npx.current_device</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.num_gpus.html">mxnet.npx.num_gpus</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.activation.html">mxnet.npx.activation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.batch_norm.html">mxnet.npx.batch_norm</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.convolution.html">mxnet.npx.convolution</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.dropout.html">mxnet.npx.dropout</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.embedding.html">mxnet.npx.embedding</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.fully_connected.html">mxnet.npx.fully_connected</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.layer_norm.html">mxnet.npx.layer_norm</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.pooling.html">mxnet.npx.pooling</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.rnn.html">mxnet.npx.rnn</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.leaky_relu.html">mxnet.npx.leaky_relu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.multibox_detection.html">mxnet.npx.multibox_detection</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.multibox_prior.html">mxnet.npx.multibox_prior</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.multibox_target.html">mxnet.npx.multibox_target</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.roi_pooling.html">mxnet.npx.roi_pooling</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.sigmoid.html">mxnet.npx.sigmoid</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.relu.html">mxnet.npx.relu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.smooth_l1.html">mxnet.npx.smooth_l1</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.softmax.html">mxnet.npx.softmax</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.log_softmax.html">mxnet.npx.log_softmax</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.topk.html">mxnet.npx.topk</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.waitall.html">mxnet.npx.waitall</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.load.html">mxnet.npx.load</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.save.html">mxnet.npx.save</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.one_hot.html">mxnet.npx.one_hot</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.pick.html">mxnet.npx.pick</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.reshape_like.html">mxnet.npx.reshape_like</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.batch_flatten.html">mxnet.npx.batch_flatten</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.batch_dot.html">mxnet.npx.batch_dot</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.gamma.html">mxnet.npx.gamma</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.sequence_mask.html">mxnet.npx.sequence_mask</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/gluon/index.html">mxnet.gluon</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/block.html">gluon.Block</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/hybrid_block.html">gluon.HybridBlock</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/symbol_block.html">gluon.SymbolBlock</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/constant.html">gluon.Constant</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/parameter.html">gluon.Parameter</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/trainer.html">gluon.Trainer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/contrib/index.html">gluon.contrib</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/data/index.html">gluon.data</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/gluon/data/vision/index.html">data.vision</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/gluon/data/vision/datasets/index.html">vision.datasets</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/gluon/data/vision/transforms/index.html">vision.transforms</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/loss/index.html">gluon.loss</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/metric/index.html">gluon.metric</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/model_zoo/index.html">gluon.model_zoo.vision</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/nn/index.html">gluon.nn</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/rnn/index.html">gluon.rnn</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/utils/index.html">gluon.utils</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/autograd/index.html">mxnet.autograd</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/initializer/index.html">mxnet.initializer</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/optimizer/index.html">mxnet.optimizer</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/lr_scheduler/index.html">mxnet.lr_scheduler</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/kvstore/index.html">KVStore: Communication for Distributed Training</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/kvstore/index.html#horovod">Horovod</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/kvstore/generated/mxnet.kvstore.Horovod.html">mxnet.kvstore.Horovod</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/kvstore/index.html#byteps">BytePS</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/kvstore/generated/mxnet.kvstore.BytePS.html">mxnet.kvstore.BytePS</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/kvstore/index.html#kvstore-interface">KVStore Interface</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/kvstore/generated/mxnet.kvstore.KVStore.html">mxnet.kvstore.KVStore</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/kvstore/generated/mxnet.kvstore.KVStoreBase.html">mxnet.kvstore.KVStoreBase</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/kvstore/generated/mxnet.kvstore.KVStoreServer.html">mxnet.kvstore.KVStoreServer</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/contrib/index.html">mxnet.contrib</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/io/index.html">contrib.io</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/ndarray/index.html">contrib.ndarray</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/onnx/index.html">contrib.onnx</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/quantization/index.html">contrib.quantization</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/symbol/index.html">contrib.symbol</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/tensorboard/index.html">contrib.tensorboard</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/tensorrt/index.html">contrib.tensorrt</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/text/index.html">contrib.text</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/legacy/index.html">Legacy</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/legacy/callback/index.html">mxnet.callback</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/legacy/image/index.html">mxnet.image</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/legacy/io/index.html">mxnet.io</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/legacy/ndarray/index.html">mxnet.ndarray</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/ndarray.html">ndarray</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/contrib/index.html">ndarray.contrib</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/image/index.html">ndarray.image</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/linalg/index.html">ndarray.linalg</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/op/index.html">ndarray.op</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/random/index.html">ndarray.random</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/register/index.html">ndarray.register</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/sparse/index.html">ndarray.sparse</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/utils/index.html">ndarray.utils</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/legacy/recordio/index.html">mxnet.recordio</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/legacy/symbol/index.html">mxnet.symbol</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/symbol.html">symbol</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/contrib/index.html">symbol.contrib</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/image/index.html">symbol.image</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/linalg/index.html">symbol.linalg</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/op/index.html">symbol.op</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/random/index.html">symbol.random</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/register/index.html">symbol.register</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/sparse/index.html">symbol.sparse</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/legacy/visualization/index.html">mxnet.visualization</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/device/index.html">mxnet.device</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/engine/index.html">mxnet.engine</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/executor/index.html">mxnet.executor</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/kvstore_server/index.html">mxnet.kvstore_server</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/profiler/index.html">mxnet.profiler</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/rtc/index.html">mxnet.rtc</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/runtime/index.html">mxnet.runtime</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/runtime/generated/mxnet.runtime.Feature.html">mxnet.runtime.Feature</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/runtime/generated/mxnet.runtime.Features.html">mxnet.runtime.Features</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/runtime/generated/mxnet.runtime.feature_list.html">mxnet.runtime.feature_list</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/test_utils/index.html">mxnet.test_utils</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/util/index.html">mxnet.util</a></li>
</ul>
</li>
</ul>
</nav>
</div>
</header>
<main class="mdl-layout__content" tabIndex="0">
<header class="mdl-layout__drawer">
<div class="globaltoc">
<span class="mdl-layout-title toc">Table Of Contents</span>
<nav class="mdl-navigation">
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../../../tutorials/index.html">Python Tutorials</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../../../tutorials/getting-started/index.html">Getting Started</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/index.html">Crash Course</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/0-introduction.html">Introduction</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/1-nparray.html">Step 1: Manipulate data with NP on MXNet</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/2-create-nn.html">Step 2: Create a neural network</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/3-autograd.html">Step 3: Automatic differentiation with autograd</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/4-components.html">Step 4: Necessary components that are not in the network</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/5-datasets.html">Step 5: <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s and <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/5-datasets.html#Using-own-data-with-included-Datasets">Using own data with included <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/5-datasets.html#Using-your-own-data-with-custom-Datasets">Using your own data with custom <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/5-datasets.html#New-in-MXNet-2.0:-faster-C++-backend-dataloaders">New in MXNet 2.0: faster C++ backend dataloaders</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/6-train-nn.html">Step 6: Train a Neural Network</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/crash-course/7-use-gpus.html">Step 7: Load and Run a NN using GPU</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/getting-started/to-mxnet/index.html">Moving to MXNet from Other Frameworks</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/getting-started/to-mxnet/pytorch.html">PyTorch vs Apache MXNet</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/getting-started/gluon_from_experiment_to_deployment.html">Gluon: from experiment to deployment</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/getting-started/gluon_migration_guide.html">Gluon2.0: Migration Guide</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/getting-started/logistic_regression_explained.html">Logistic regression explained</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/image/mnist.html">MNIST</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../tutorials/packages/index.html">Packages</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/autograd/index.html">Automatic Differentiation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/gluon/index.html">Gluon</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/index.html">Blocks</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/custom-layer.html">Custom Layers</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/hybridize.html">Hybridize</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/init.html">Initialization</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/naming.html">Parameter and Block Naming</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/nn.html">Layers and Blocks</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/parameters.html">Parameter Management</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/blocks/activations/activations.html">Activation Blocks</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/gluon/data/index.html">Data Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/data/data_augmentation.html">Image Augmentation</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/data/datasets.html">Gluon <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s and <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-included-Datasets">Using own data with included <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-custom-Datasets">Using own data with custom <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/data/datasets.html#Appendix:-Upgrading-from-Module-DataIter-to-Gluon-DataLoader">Appendix: Upgrading from Module <code class="docutils literal notranslate"><span class="pre">DataIter</span></code> to Gluon <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/gluon/image/index.html">Image Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/image/info_gan.html">Image similarity search with InfoGAN</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/image/mnist.html">Handwritten Digit Recognition</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/gluon/loss/index.html">Losses</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/loss/custom-loss.html">Custom Loss Blocks</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/loss/kl_divergence.html">Kullback-Leibler (KL) Divergence</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/loss/loss.html">Loss functions</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/gluon/text/index.html">Text Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/text/gnmt.html">Google Neural Machine Translation</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/text/transformer.html">Machine Translation with Transformer</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/index.html">Training</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/fit_api_tutorial.html">MXNet Gluon Fit API</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/trainer.html">Trainer</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/learning_rates/index.html">Learning Rates</a><ul>
<li class="toctree-l6"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/learning_rates/learning_rate_finder.html">Learning Rate Finder</a></li>
<li class="toctree-l6"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules.html">Learning Rate Schedules</a></li>
<li class="toctree-l6"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules_advanced.html">Advanced Learning Rate Schedules</a></li>
</ul>
</li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/gluon/training/normalization/index.html">Normalization Blocks</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/kvstore/index.html">KVStore</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/kvstore/kvstore.html">Distributed Key-Value Store</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/legacy/index.html">Legacy</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/index.html">NDArray</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/01-ndarray-intro.html">An Intro: Manipulate Data the MXNet Way with NDArray</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/02-ndarray-operations.html">NDArray Operations</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/03-ndarray-contexts.html">NDArray Contexts</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/gotchas_numpy_in_mxnet.html">Gotchas using NumPy in Apache MXNet</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/sparse/index.html">Tutorials</a><ul>
<li class="toctree-l6"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/sparse/csr.html">CSRNDArray - NDArray in Compressed Sparse Row Storage Format</a></li>
<li class="toctree-l6"><a class="reference internal" href="../../../../tutorials/packages/legacy/ndarray/sparse/row_sparse.html">RowSparseNDArray - NDArray for Sparse Gradient Updates</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/np/index.html">What is NP on MXNet</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/np/cheat-sheet.html">The NP on MXNet cheat sheet</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/np/np-vs-numpy.html">Differences between NP on MXNet and NumPy</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/onnx/index.html">ONNX</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/onnx/fine_tuning_gluon.html">Fine-tuning an ONNX model</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/packages/onnx/inference_on_onnx_model.html">Running inference on MXNet/Gluon from an ONNX model</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/deploy/export/onnx.html">Export ONNX Models</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/optimizer/index.html">Optimizers</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/packages/viz/index.html">Visualization</a><ul>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/visualize_graph">Visualize networks</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../tutorials/performance/index.html">Performance</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/performance/compression/index.html">Compression</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/performance/compression/int8.html">Deploy with int-8</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/float16">Float16</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/gradient_compression">Gradient Compression</a></li>
<li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/int8_inference.html">GluonCV with Quantized Models</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/performance/backend/index.html">Accelerated Backend Tools</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/performance/backend/dnnl/index.html">oneDNN</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/performance/backend/dnnl/dnnl_readme.html">Install MXNet with oneDNN</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/performance/backend/dnnl/dnnl_quantization.html">oneDNN Quantization</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../tutorials/performance/backend/dnnl/dnnl_quantization_inc.html">Improving accuracy with Intel® Neural Compressor</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/performance/backend/tvm.html">Use TVM</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/performance/backend/profiler.html">Profiling MXNet Models</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/performance/backend/amp.html">Using AMP: Automatic Mixed Precision</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../tutorials/deploy/index.html">Deployment</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/deploy/export/index.html">Export</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/deploy/export/onnx.html">Exporting to ONNX format</a></li>
<li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/export_network.html">Export Gluon CV Models</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Save / Load Parameters</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/deploy/inference/index.html">Inference</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/deploy/inference/cpp.html">Deploy into C++</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/deploy/inference/image_classification_jetson.html">Image Classication using pretrained ResNet-50 model on Jetson module</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/deploy/run-on-aws/index.html">Run on AWS</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/deploy/run-on-aws/use_ec2.html">Run on an EC2 Instance</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/deploy/run-on-aws/use_sagemaker.html">Run on Amazon SageMaker</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../tutorials/deploy/run-on-aws/cloud.html">MXNet on the Cloud</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../tutorials/extend/index.html">Extend</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../tutorials/extend/customop.html">Custom Numpy Operators</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/new_op">New Operator Creation</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/add_op_in_backend">New Operator in MXNet Backend</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/using_rtc">Using RTC for CUDA kernels</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../../../api/index.html">Python API</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/np/index.html">mxnet.np</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/np/arrays.html">Array objects</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/arrays.ndarray.html">The N-dimensional array (<code class="xref py py-class docutils literal notranslate"><span class="pre">ndarray</span></code>)</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/arrays.indexing.html">Indexing</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/np/routines.html">Routines</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/routines.array-creation.html">Array creation routines</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.eye.html">mxnet.np.eye</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.empty.html">mxnet.np.empty</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.full.html">mxnet.np.full</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.identity.html">mxnet.np.identity</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ones.html">mxnet.np.ones</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ones_like.html">mxnet.np.ones_like</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.zeros.html">mxnet.np.zeros</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.zeros_like.html">mxnet.np.zeros_like</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.array.html">mxnet.np.array</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.copy.html">mxnet.np.copy</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arange.html">mxnet.np.arange</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linspace.html">mxnet.np.linspace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.logspace.html">mxnet.np.logspace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.meshgrid.html">mxnet.np.meshgrid</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.tril.html">mxnet.np.tril</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/routines.array-manipulation.html">Array manipulation routines</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.reshape.html">mxnet.np.reshape</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ravel.html">mxnet.np.ravel</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ndarray.flatten.html">mxnet.np.ndarray.flatten</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.swapaxes.html">mxnet.np.swapaxes</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ndarray.T.html">mxnet.np.ndarray.T</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.transpose.html">mxnet.np.transpose</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.moveaxis.html">mxnet.np.moveaxis</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.rollaxis.html">mxnet.np.rollaxis</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.expand_dims.html">mxnet.np.expand_dims</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.squeeze.html">mxnet.np.squeeze</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.broadcast_to.html">mxnet.np.broadcast_to</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.broadcast_arrays.html">mxnet.np.broadcast_arrays</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.atleast_1d.html">mxnet.np.atleast_1d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.atleast_2d.html">mxnet.np.atleast_2d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.atleast_3d.html">mxnet.np.atleast_3d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.concatenate.html">mxnet.np.concatenate</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.stack.html">mxnet.np.stack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.dstack.html">mxnet.np.dstack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.vstack.html">mxnet.np.vstack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.column_stack.html">mxnet.np.column_stack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.hstack.html">mxnet.np.hstack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.split.html">mxnet.np.split</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.hsplit.html">mxnet.np.hsplit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.vsplit.html">mxnet.np.vsplit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.array_split.html">mxnet.np.array_split</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.dsplit.html">mxnet.np.dsplit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.tile.html">mxnet.np.tile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.repeat.html">mxnet.np.repeat</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.unique.html">mxnet.np.unique</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.delete.html">mxnet.np.delete</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.insert.html">mxnet.np.insert</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.append.html">mxnet.np.append</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.resize.html">mxnet.np.resize</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.trim_zeros.html">mxnet.np.trim_zeros</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.reshape.html">mxnet.np.reshape</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.flip.html">mxnet.np.flip</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.roll.html">mxnet.np.roll</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.rot90.html">mxnet.np.rot90</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.fliplr.html">mxnet.np.fliplr</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.flipud.html">mxnet.np.flipud</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/routines.io.html">Input and output</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.genfromtxt.html">mxnet.np.genfromtxt</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ndarray.tolist.html">mxnet.np.ndarray.tolist</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.set_printoptions.html">mxnet.np.set_printoptions</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/routines.linalg.html">Linear algebra (<code class="xref py py-mod docutils literal notranslate"><span class="pre">numpy.linalg</span></code>)</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.dot.html">mxnet.np.dot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.vdot.html">mxnet.np.vdot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.inner.html">mxnet.np.inner</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.outer.html">mxnet.np.outer</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.tensordot.html">mxnet.np.tensordot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.einsum.html">mxnet.np.einsum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.multi_dot.html">mxnet.np.linalg.multi_dot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.matmul.html">mxnet.np.matmul</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.matrix_power.html">mxnet.np.linalg.matrix_power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.kron.html">mxnet.np.kron</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.svd.html">mxnet.np.linalg.svd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.cholesky.html">mxnet.np.linalg.cholesky</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.qr.html">mxnet.np.linalg.qr</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.eig.html">mxnet.np.linalg.eig</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.eigh.html">mxnet.np.linalg.eigh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.eigvals.html">mxnet.np.linalg.eigvals</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.eigvalsh.html">mxnet.np.linalg.eigvalsh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.norm.html">mxnet.np.linalg.norm</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.trace.html">mxnet.np.trace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.cond.html">mxnet.np.linalg.cond</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.det.html">mxnet.np.linalg.det</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.matrix_rank.html">mxnet.np.linalg.matrix_rank</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.slogdet.html">mxnet.np.linalg.slogdet</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.solve.html">mxnet.np.linalg.solve</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.tensorsolve.html">mxnet.np.linalg.tensorsolve</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.lstsq.html">mxnet.np.linalg.lstsq</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.inv.html">mxnet.np.linalg.inv</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.pinv.html">mxnet.np.linalg.pinv</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.linalg.tensorinv.html">mxnet.np.linalg.tensorinv</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/routines.math.html">Mathematical functions</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.sin.html">mxnet.np.sin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.cos.html">mxnet.np.cos</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.tan.html">mxnet.np.tan</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arcsin.html">mxnet.np.arcsin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arccos.html">mxnet.np.arccos</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arctan.html">mxnet.np.arctan</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.degrees.html">mxnet.np.degrees</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.radians.html">mxnet.np.radians</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.hypot.html">mxnet.np.hypot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arctan2.html">mxnet.np.arctan2</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.deg2rad.html">mxnet.np.deg2rad</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.rad2deg.html">mxnet.np.rad2deg</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.unwrap.html">mxnet.np.unwrap</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.sinh.html">mxnet.np.sinh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.cosh.html">mxnet.np.cosh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.tanh.html">mxnet.np.tanh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arcsinh.html">mxnet.np.arcsinh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arccosh.html">mxnet.np.arccosh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.arctanh.html">mxnet.np.arctanh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.rint.html">mxnet.np.rint</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.fix.html">mxnet.np.fix</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.floor.html">mxnet.np.floor</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ceil.html">mxnet.np.ceil</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.trunc.html">mxnet.np.trunc</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.around.html">mxnet.np.around</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.round_.html">mxnet.np.round_</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.sum.html">mxnet.np.sum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.prod.html">mxnet.np.prod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.cumsum.html">mxnet.np.cumsum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanprod.html">mxnet.np.nanprod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nansum.html">mxnet.np.nansum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.cumprod.html">mxnet.np.cumprod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nancumprod.html">mxnet.np.nancumprod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nancumsum.html">mxnet.np.nancumsum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.diff.html">mxnet.np.diff</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ediff1d.html">mxnet.np.ediff1d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.cross.html">mxnet.np.cross</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.trapz.html">mxnet.np.trapz</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.exp.html">mxnet.np.exp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.expm1.html">mxnet.np.expm1</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.log.html">mxnet.np.log</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.log10.html">mxnet.np.log10</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.log2.html">mxnet.np.log2</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.log1p.html">mxnet.np.log1p</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.logaddexp.html">mxnet.np.logaddexp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.i0.html">mxnet.np.i0</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ldexp.html">mxnet.np.ldexp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.signbit.html">mxnet.np.signbit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.copysign.html">mxnet.np.copysign</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.frexp.html">mxnet.np.frexp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.spacing.html">mxnet.np.spacing</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.lcm.html">mxnet.np.lcm</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.gcd.html">mxnet.np.gcd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.add.html">mxnet.np.add</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.reciprocal.html">mxnet.np.reciprocal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.negative.html">mxnet.np.negative</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.divide.html">mxnet.np.divide</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.power.html">mxnet.np.power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.subtract.html">mxnet.np.subtract</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.mod.html">mxnet.np.mod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.multiply.html">mxnet.np.multiply</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.true_divide.html">mxnet.np.true_divide</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.remainder.html">mxnet.np.remainder</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.positive.html">mxnet.np.positive</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.float_power.html">mxnet.np.float_power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.fmod.html">mxnet.np.fmod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.modf.html">mxnet.np.modf</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.divmod.html">mxnet.np.divmod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.floor_divide.html">mxnet.np.floor_divide</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.clip.html">mxnet.np.clip</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.sqrt.html">mxnet.np.sqrt</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.cbrt.html">mxnet.np.cbrt</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.square.html">mxnet.np.square</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.absolute.html">mxnet.np.absolute</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.sign.html">mxnet.np.sign</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.maximum.html">mxnet.np.maximum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.minimum.html">mxnet.np.minimum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.fabs.html">mxnet.np.fabs</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.heaviside.html">mxnet.np.heaviside</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.fmax.html">mxnet.np.fmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.fmin.html">mxnet.np.fmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nan_to_num.html">mxnet.np.nan_to_num</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.interp.html">mxnet.np.interp</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/random/index.html">np.random</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.choice.html">mxnet.np.random.choice</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.shuffle.html">mxnet.np.random.shuffle</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.normal.html">mxnet.np.random.normal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.uniform.html">mxnet.np.random.uniform</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.rand.html">mxnet.np.random.rand</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.randint.html">mxnet.np.random.randint</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.beta.html">mxnet.np.random.beta</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.chisquare.html">mxnet.np.random.chisquare</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.exponential.html">mxnet.np.random.exponential</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.f.html">mxnet.np.random.f</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.gamma.html">mxnet.np.random.gamma</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.gumbel.html">mxnet.np.random.gumbel</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.laplace.html">mxnet.np.random.laplace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.logistic.html">mxnet.np.random.logistic</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.lognormal.html">mxnet.np.random.lognormal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.multinomial.html">mxnet.np.random.multinomial</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.multivariate_normal.html">mxnet.np.random.multivariate_normal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.pareto.html">mxnet.np.random.pareto</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.power.html">mxnet.np.random.power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.rayleigh.html">mxnet.np.random.rayleigh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/random/generated/mxnet.np.random.weibull.html">mxnet.np.random.weibull</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/routines.sort.html">Sorting, searching, and counting</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ndarray.sort.html">mxnet.np.ndarray.sort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.sort.html">mxnet.np.sort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.lexsort.html">mxnet.np.lexsort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.argsort.html">mxnet.np.argsort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.msort.html">mxnet.np.msort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.partition.html">mxnet.np.partition</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.argpartition.html">mxnet.np.argpartition</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.argmax.html">mxnet.np.argmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.argmin.html">mxnet.np.argmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanargmax.html">mxnet.np.nanargmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanargmin.html">mxnet.np.nanargmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.argwhere.html">mxnet.np.argwhere</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nonzero.html">mxnet.np.nonzero</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.flatnonzero.html">mxnet.np.flatnonzero</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.where.html">mxnet.np.where</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.searchsorted.html">mxnet.np.searchsorted</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.extract.html">mxnet.np.extract</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.count_nonzero.html">mxnet.np.count_nonzero</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/np/routines.statistics.html">Statistics</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.min.html">mxnet.np.min</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.max.html">mxnet.np.max</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.amin.html">mxnet.np.amin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.amax.html">mxnet.np.amax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanmin.html">mxnet.np.nanmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanmax.html">mxnet.np.nanmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.ptp.html">mxnet.np.ptp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.percentile.html">mxnet.np.percentile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanpercentile.html">mxnet.np.nanpercentile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.quantile.html">mxnet.np.quantile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanquantile.html">mxnet.np.nanquantile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.mean.html">mxnet.np.mean</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.std.html">mxnet.np.std</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.var.html">mxnet.np.var</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.median.html">mxnet.np.median</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.average.html">mxnet.np.average</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanmedian.html">mxnet.np.nanmedian</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanstd.html">mxnet.np.nanstd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.nanvar.html">mxnet.np.nanvar</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.corrcoef.html">mxnet.np.corrcoef</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.correlate.html">mxnet.np.correlate</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.cov.html">mxnet.np.cov</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.histogram.html">mxnet.np.histogram</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.histogram2d.html">mxnet.np.histogram2d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.histogramdd.html">mxnet.np.histogramdd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.bincount.html">mxnet.np.bincount</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.histogram_bin_edges.html">mxnet.np.histogram_bin_edges</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/np/generated/mxnet.np.digitize.html">mxnet.np.digitize</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/npx/index.html">NPX: NumPy Neural Network Extension</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.set_np.html">mxnet.npx.set_np</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.reset_np.html">mxnet.npx.reset_np</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.cpu.html">mxnet.npx.cpu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.cpu_pinned.html">mxnet.npx.cpu_pinned</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.gpu.html">mxnet.npx.gpu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.gpu_memory_info.html">mxnet.npx.gpu_memory_info</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.current_device.html">mxnet.npx.current_device</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.num_gpus.html">mxnet.npx.num_gpus</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.activation.html">mxnet.npx.activation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.batch_norm.html">mxnet.npx.batch_norm</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.convolution.html">mxnet.npx.convolution</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.dropout.html">mxnet.npx.dropout</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.embedding.html">mxnet.npx.embedding</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.fully_connected.html">mxnet.npx.fully_connected</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.layer_norm.html">mxnet.npx.layer_norm</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.pooling.html">mxnet.npx.pooling</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.rnn.html">mxnet.npx.rnn</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.leaky_relu.html">mxnet.npx.leaky_relu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.multibox_detection.html">mxnet.npx.multibox_detection</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.multibox_prior.html">mxnet.npx.multibox_prior</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.multibox_target.html">mxnet.npx.multibox_target</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.roi_pooling.html">mxnet.npx.roi_pooling</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.sigmoid.html">mxnet.npx.sigmoid</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.relu.html">mxnet.npx.relu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.smooth_l1.html">mxnet.npx.smooth_l1</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.softmax.html">mxnet.npx.softmax</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.log_softmax.html">mxnet.npx.log_softmax</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.topk.html">mxnet.npx.topk</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.waitall.html">mxnet.npx.waitall</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.load.html">mxnet.npx.load</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.save.html">mxnet.npx.save</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.one_hot.html">mxnet.npx.one_hot</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.pick.html">mxnet.npx.pick</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.reshape_like.html">mxnet.npx.reshape_like</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.batch_flatten.html">mxnet.npx.batch_flatten</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.batch_dot.html">mxnet.npx.batch_dot</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.gamma.html">mxnet.npx.gamma</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/npx/generated/mxnet.npx.sequence_mask.html">mxnet.npx.sequence_mask</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/gluon/index.html">mxnet.gluon</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/block.html">gluon.Block</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/hybrid_block.html">gluon.HybridBlock</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/symbol_block.html">gluon.SymbolBlock</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/constant.html">gluon.Constant</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/parameter.html">gluon.Parameter</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/trainer.html">gluon.Trainer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/contrib/index.html">gluon.contrib</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/data/index.html">gluon.data</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/gluon/data/vision/index.html">data.vision</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/gluon/data/vision/datasets/index.html">vision.datasets</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../../api/gluon/data/vision/transforms/index.html">vision.transforms</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/loss/index.html">gluon.loss</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/metric/index.html">gluon.metric</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/model_zoo/index.html">gluon.model_zoo.vision</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/nn/index.html">gluon.nn</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/rnn/index.html">gluon.rnn</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/gluon/utils/index.html">gluon.utils</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/autograd/index.html">mxnet.autograd</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/initializer/index.html">mxnet.initializer</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/optimizer/index.html">mxnet.optimizer</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/lr_scheduler/index.html">mxnet.lr_scheduler</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/kvstore/index.html">KVStore: Communication for Distributed Training</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/kvstore/index.html#horovod">Horovod</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/kvstore/generated/mxnet.kvstore.Horovod.html">mxnet.kvstore.Horovod</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/kvstore/index.html#byteps">BytePS</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/kvstore/generated/mxnet.kvstore.BytePS.html">mxnet.kvstore.BytePS</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/kvstore/index.html#kvstore-interface">KVStore Interface</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/kvstore/generated/mxnet.kvstore.KVStore.html">mxnet.kvstore.KVStore</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/kvstore/generated/mxnet.kvstore.KVStoreBase.html">mxnet.kvstore.KVStoreBase</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/kvstore/generated/mxnet.kvstore.KVStoreServer.html">mxnet.kvstore.KVStoreServer</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/contrib/index.html">mxnet.contrib</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/io/index.html">contrib.io</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/ndarray/index.html">contrib.ndarray</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/onnx/index.html">contrib.onnx</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/quantization/index.html">contrib.quantization</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/symbol/index.html">contrib.symbol</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/tensorboard/index.html">contrib.tensorboard</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/tensorrt/index.html">contrib.tensorrt</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/contrib/text/index.html">contrib.text</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/legacy/index.html">Legacy</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/legacy/callback/index.html">mxnet.callback</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/legacy/image/index.html">mxnet.image</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/legacy/io/index.html">mxnet.io</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/legacy/ndarray/index.html">mxnet.ndarray</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/ndarray.html">ndarray</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/contrib/index.html">ndarray.contrib</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/image/index.html">ndarray.image</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/linalg/index.html">ndarray.linalg</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/op/index.html">ndarray.op</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/random/index.html">ndarray.random</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/register/index.html">ndarray.register</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/sparse/index.html">ndarray.sparse</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/ndarray/utils/index.html">ndarray.utils</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/legacy/recordio/index.html">mxnet.recordio</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/legacy/symbol/index.html">mxnet.symbol</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/symbol.html">symbol</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/contrib/index.html">symbol.contrib</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/image/index.html">symbol.image</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/linalg/index.html">symbol.linalg</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/op/index.html">symbol.op</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/random/index.html">symbol.random</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/register/index.html">symbol.register</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../../api/legacy/symbol/sparse/index.html">symbol.sparse</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/legacy/visualization/index.html">mxnet.visualization</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/device/index.html">mxnet.device</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/engine/index.html">mxnet.engine</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/executor/index.html">mxnet.executor</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/kvstore_server/index.html">mxnet.kvstore_server</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/profiler/index.html">mxnet.profiler</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/rtc/index.html">mxnet.rtc</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/runtime/index.html">mxnet.runtime</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/runtime/generated/mxnet.runtime.Feature.html">mxnet.runtime.Feature</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/runtime/generated/mxnet.runtime.Features.html">mxnet.runtime.Features</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../../api/runtime/generated/mxnet.runtime.feature_list.html">mxnet.runtime.feature_list</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/test_utils/index.html">mxnet.test_utils</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../../../api/util/index.html">mxnet.util</a></li>
</ul>
</li>
</ul>
</nav>
</div>
</header>
<div class="document">
<div class="page-content" role="main">
<h1>Source code for mxnet.gluon.nn.conv_layers</h1><div class="highlight"><pre>
<span></span><span class="c1"># Licensed to the Apache Software Foundation (ASF) under one</span>
<span class="c1"># or more contributor license agreements. See the NOTICE file</span>
<span class="c1"># distributed with this work for additional information</span>
<span class="c1"># regarding copyright ownership. The ASF licenses this file</span>
<span class="c1"># to you under the Apache License, Version 2.0 (the</span>
<span class="c1"># &quot;License&quot;); you may not use this file except in compliance</span>
<span class="c1"># with the License. You may obtain a copy of the License at</span>
<span class="c1">#</span>
<span class="c1"># http://www.apache.org/licenses/LICENSE-2.0</span>
<span class="c1">#</span>
<span class="c1"># Unless required by applicable law or agreed to in writing,</span>
<span class="c1"># software distributed under the License is distributed on an</span>
<span class="c1"># &quot;AS IS&quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY</span>
<span class="c1"># KIND, either express or implied. See the License for the</span>
<span class="c1"># specific language governing permissions and limitations</span>
<span class="c1"># under the License.</span>
<span class="c1"># coding: utf-8</span>
<span class="c1"># pylint: disable= arguments-differ, too-many-lines</span>
<span class="sd">&quot;&quot;&quot;Convolutional neural network layers.&quot;&quot;&quot;</span>
<span class="n">__all__</span> <span class="o">=</span> <span class="p">[</span><span class="s1">&#39;Conv1D&#39;</span><span class="p">,</span> <span class="s1">&#39;Conv2D&#39;</span><span class="p">,</span> <span class="s1">&#39;Conv3D&#39;</span><span class="p">,</span>
<span class="s1">&#39;Conv1DTranspose&#39;</span><span class="p">,</span> <span class="s1">&#39;Conv2DTranspose&#39;</span><span class="p">,</span> <span class="s1">&#39;Conv3DTranspose&#39;</span><span class="p">,</span>
<span class="s1">&#39;MaxPool1D&#39;</span><span class="p">,</span> <span class="s1">&#39;MaxPool2D&#39;</span><span class="p">,</span> <span class="s1">&#39;MaxPool3D&#39;</span><span class="p">,</span>
<span class="s1">&#39;AvgPool1D&#39;</span><span class="p">,</span> <span class="s1">&#39;AvgPool2D&#39;</span><span class="p">,</span> <span class="s1">&#39;AvgPool3D&#39;</span><span class="p">,</span>
<span class="s1">&#39;GlobalMaxPool1D&#39;</span><span class="p">,</span> <span class="s1">&#39;GlobalMaxPool2D&#39;</span><span class="p">,</span> <span class="s1">&#39;GlobalMaxPool3D&#39;</span><span class="p">,</span>
<span class="s1">&#39;GlobalAvgPool1D&#39;</span><span class="p">,</span> <span class="s1">&#39;GlobalAvgPool2D&#39;</span><span class="p">,</span> <span class="s1">&#39;GlobalAvgPool3D&#39;</span><span class="p">,</span>
<span class="s1">&#39;ReflectionPad2D&#39;</span><span class="p">,</span> <span class="s1">&#39;DeformableConvolution&#39;</span><span class="p">,</span> <span class="s1">&#39;ModulatedDeformableConvolution&#39;</span><span class="p">,</span>
<span class="s1">&#39;PixelShuffle1D&#39;</span><span class="p">,</span> <span class="s1">&#39;PixelShuffle2D&#39;</span><span class="p">,</span> <span class="s1">&#39;PixelShuffle3D&#39;</span><span class="p">]</span>
<span class="kn">from</span> <span class="nn">..block</span> <span class="kn">import</span> <span class="n">HybridBlock</span>
<span class="kn">from</span> <span class="nn">..parameter</span> <span class="kn">import</span> <span class="n">Parameter</span>
<span class="kn">from</span> <span class="nn">...</span> <span class="kn">import</span> <span class="n">np</span><span class="p">,</span> <span class="n">npx</span>
<span class="kn">from</span> <span class="nn">...base</span> <span class="kn">import</span> <span class="n">numeric_types</span>
<span class="kn">from</span> <span class="nn">.activations</span> <span class="kn">import</span> <span class="n">Activation</span>
<span class="kn">from</span> <span class="nn">...util</span> <span class="kn">import</span> <span class="n">use_np</span>
<span class="nd">@use_np</span>
<span class="k">class</span> <span class="nc">_Conv</span><span class="p">(</span><span class="n">HybridBlock</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Abstract nD convolution layer (private, used as implementation base).</span>
<span class="sd"> This layer creates a convolution kernel that is convolved</span>
<span class="sd"> with the layer input to produce a tensor of outputs.</span>
<span class="sd"> If `use_bias` is `True`, a bias vector is created and added to the outputs.</span>
<span class="sd"> Finally, if `activation` is not `None`,</span>
<span class="sd"> it is applied to the outputs as well.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> channels : int</span>
<span class="sd"> The dimensionality of the output space</span>
<span class="sd"> i.e. the number of output channels in the convolution.</span>
<span class="sd"> kernel_size : int or tuple/list of n ints</span>
<span class="sd"> Specifies the dimensions of the convolution window.</span>
<span class="sd"> strides: int or tuple/list of n ints,</span>
<span class="sd"> Specifies the strides of the convolution.</span>
<span class="sd"> padding : int or tuple/list of n ints,</span>
<span class="sd"> If padding is non-zero, then the input is implicitly zero-padded</span>
<span class="sd"> on both sides for padding number of points</span>
<span class="sd"> dilation: int or tuple/list of n ints,</span>
<span class="sd"> Specifies the dilation rate to use for dilated convolution.</span>
<span class="sd"> groups : int</span>
<span class="sd"> Controls the connections between inputs and outputs.</span>
<span class="sd"> At groups=1, all inputs are convolved to all outputs.</span>
<span class="sd"> At groups=2, the operation becomes equivalent to having two convolution</span>
<span class="sd"> layers side by side, each seeing half the input channels, and producing</span>
<span class="sd"> half the output channels, and both subsequently concatenated.</span>
<span class="sd"> layout : str,</span>
<span class="sd"> Dimension ordering of data and weight. Can be &#39;NCW&#39;, &#39;NWC&#39;, &#39;NCHW&#39;,</span>
<span class="sd"> &#39;NHWC&#39;, &#39;NCDHW&#39;, &#39;NDHWC&#39;, etc. &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39;, &#39;D&#39; stands for</span>
<span class="sd"> batch, channel, height, width and depth dimensions respectively.</span>
<span class="sd"> Convolution is performed over &#39;D&#39;, &#39;H&#39;, and &#39;W&#39; dimensions.</span>
<span class="sd"> in_channels : int, default 0</span>
<span class="sd"> The number of input channels to this layer. If not specified,</span>
<span class="sd"> initialization will be deferred to the first time `forward` is called</span>
<span class="sd"> and `in_channels` will be inferred from the shape of input data.</span>
<span class="sd"> activation : str</span>
<span class="sd"> Activation function to use. See :func:`~mxnet.npx.activation`.</span>
<span class="sd"> If you don&#39;t specify anything, no activation is applied</span>
<span class="sd"> (ie. &quot;linear&quot; activation: `a(x) = x`).</span>
<span class="sd"> use_bias: bool</span>
<span class="sd"> Whether the layer uses a bias vector.</span>
<span class="sd"> weight_initializer : str or `Initializer`</span>
<span class="sd"> Initializer for the `weight` weights matrix.</span>
<span class="sd"> bias_initializer: str or `Initializer`</span>
<span class="sd"> Initializer for the bias vector.</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="n">strides</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">dilation</span><span class="p">,</span>
<span class="n">groups</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="n">in_channels</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">use_bias</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span>
<span class="n">weight_initializer</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">bias_initializer</span><span class="o">=</span><span class="s1">&#39;zeros&#39;</span><span class="p">,</span>
<span class="n">op_name</span><span class="o">=</span><span class="s1">&#39;convolution&#39;</span><span class="p">,</span> <span class="n">adj</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
<span class="nb">super</span><span class="p">(</span><span class="n">_Conv</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_channels</span> <span class="o">=</span> <span class="n">channels</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_in_channels</span> <span class="o">=</span> <span class="n">in_channels</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span> <span class="o">=</span> <span class="n">kernel_size</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_layout</span> <span class="o">=</span> <span class="n">layout</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_groups</span> <span class="o">=</span> <span class="n">groups</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">strides</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">strides</span> <span class="o">=</span> <span class="p">(</span><span class="n">strides</span><span class="p">,)</span><span class="o">*</span><span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">padding</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">padding</span> <span class="o">=</span> <span class="p">(</span><span class="n">padding</span><span class="p">,)</span><span class="o">*</span><span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">dilation</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">dilation</span> <span class="o">=</span> <span class="p">(</span><span class="n">dilation</span><span class="p">,)</span><span class="o">*</span><span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_op_name</span> <span class="o">=</span> <span class="n">op_name</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kwargs</span> <span class="o">=</span> <span class="p">{</span>
<span class="s1">&#39;kernel&#39;</span><span class="p">:</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="s1">&#39;stride&#39;</span><span class="p">:</span> <span class="n">strides</span><span class="p">,</span> <span class="s1">&#39;dilate&#39;</span><span class="p">:</span> <span class="n">dilation</span><span class="p">,</span>
<span class="s1">&#39;pad&#39;</span><span class="p">:</span> <span class="n">padding</span><span class="p">,</span> <span class="s1">&#39;num_filter&#39;</span><span class="p">:</span> <span class="n">channels</span><span class="p">,</span> <span class="s1">&#39;num_group&#39;</span><span class="p">:</span> <span class="n">groups</span><span class="p">,</span>
<span class="s1">&#39;no_bias&#39;</span><span class="p">:</span> <span class="ow">not</span> <span class="n">use_bias</span><span class="p">,</span> <span class="s1">&#39;layout&#39;</span><span class="p">:</span> <span class="n">layout</span><span class="p">}</span>
<span class="k">if</span> <span class="n">adj</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kwargs</span><span class="p">[</span><span class="s1">&#39;adj&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="n">adj</span>
<span class="bp">self</span><span class="o">.</span><span class="n">weight</span> <span class="o">=</span> <span class="n">Parameter</span><span class="p">(</span><span class="s1">&#39;weight&#39;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">pre_infer</span><span class="p">(),</span>
<span class="n">init</span><span class="o">=</span><span class="n">weight_initializer</span><span class="p">,</span>
<span class="n">allow_deferred_init</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="k">if</span> <span class="n">use_bias</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">bias</span> <span class="o">=</span> <span class="n">Parameter</span><span class="p">(</span><span class="s1">&#39;bias&#39;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="n">channels</span><span class="p">,),</span>
<span class="n">init</span><span class="o">=</span><span class="n">bias_initializer</span><span class="p">,</span>
<span class="n">allow_deferred_init</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">bias</span> <span class="o">=</span> <span class="kc">None</span>
<span class="k">if</span> <span class="n">activation</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">act</span> <span class="o">=</span> <span class="n">Activation</span><span class="p">(</span><span class="n">activation</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">act</span> <span class="o">=</span> <span class="kc">None</span>
<span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="n">device</span> <span class="o">=</span> <span class="n">x</span><span class="o">.</span><span class="n">device</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">bias</span> <span class="ow">is</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">act</span> <span class="o">=</span> <span class="nb">getattr</span><span class="p">(</span><span class="n">npx</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">_op_name</span><span class="p">)(</span><span class="n">x</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">weight</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span> <span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">act</span> <span class="o">=</span> <span class="nb">getattr</span><span class="p">(</span><span class="n">npx</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">_op_name</span><span class="p">)(</span><span class="n">x</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">weight</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span> <span class="bp">self</span><span class="o">.</span><span class="n">bias</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span>
<span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs</span><span class="p">)</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">act</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">act</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">act</span><span class="p">(</span><span class="n">act</span><span class="p">)</span>
<span class="k">return</span> <span class="n">act</span>
<span class="k">def</span> <span class="nf">pre_infer</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;</span>
<span class="sd"> Pre-infer the shape of weight parameter based on kernel size, group size and channels</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="n">wshape</span> <span class="o">=</span> <span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">)</span> <span class="o">+</span> <span class="mi">2</span><span class="p">)</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">_op_name</span> <span class="o">==</span> <span class="s2">&quot;convolution&quot;</span><span class="p">:</span>
<span class="k">if</span> <span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">1</span><span class="p">:</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;N&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_channels</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;W&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="k">elif</span> <span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">2</span><span class="p">:</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;N&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_channels</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;H&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;W&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">3</span><span class="p">,</span> <span class="s2">&quot;kernel_size must be 1, 2 or 3&quot;</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;N&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_channels</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;D&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;H&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;W&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">assert</span> <span class="bp">self</span><span class="o">.</span><span class="n">_op_name</span> <span class="o">==</span> <span class="s2">&quot;deconvolution&quot;</span><span class="p">,</span> \
<span class="s2">&quot;Only support operator name with convolution and deconvolution&quot;</span>
<span class="k">if</span> <span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">1</span><span class="p">:</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;C&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_channels</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;W&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="k">elif</span> <span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">2</span><span class="p">:</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;C&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_channels</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;H&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;W&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">3</span><span class="p">,</span> <span class="s2">&quot;kernel_size must be 1, 2 or 3&quot;</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;C&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_channels</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;D&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;H&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;W&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span>
<span class="k">return</span> <span class="nb">tuple</span><span class="p">(</span><span class="n">wshape</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">infer_shape</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="n">dshape1</span> <span class="o">=</span> <span class="n">x</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;C&#39;</span><span class="p">)]</span>
<span class="n">wshape</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">weight</span><span class="o">.</span><span class="n">shape</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">_op_name</span> <span class="o">==</span> <span class="s2">&quot;convolution&quot;</span><span class="p">:</span>
<span class="n">wshape_list</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="n">wshape</span><span class="p">)</span>
<span class="n">wshape_list</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;C&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="n">dshape1</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">assert</span> <span class="bp">self</span><span class="o">.</span><span class="n">_op_name</span> <span class="o">==</span> <span class="s2">&quot;deconvolution&quot;</span><span class="p">,</span> \
<span class="s2">&quot;Only support operator name with convolution and deconvolution&quot;</span>
<span class="n">wshape_list</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="n">wshape</span><span class="p">)</span>
<span class="n">wshape_list</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;N&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="n">dshape1</span>
<span class="bp">self</span><span class="o">.</span><span class="n">weight</span><span class="o">.</span><span class="n">shape</span> <span class="o">=</span> <span class="nb">tuple</span><span class="p">(</span><span class="n">wshape_list</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">_alias</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="k">return</span> <span class="s1">&#39;conv&#39;</span>
<span class="k">def</span> <span class="fm">__repr__</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="n">s</span> <span class="o">=</span> <span class="s1">&#39;</span><span class="si">{name}</span><span class="s1">(</span><span class="si">{mapping}</span><span class="s1">, kernel_size=</span><span class="si">{kernel}</span><span class="s1">, stride=</span><span class="si">{stride}</span><span class="s1">&#39;</span>
<span class="n">len_kernel_size</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs</span><span class="p">[</span><span class="s1">&#39;kernel&#39;</span><span class="p">])</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kwargs</span><span class="p">[</span><span class="s1">&#39;pad&#39;</span><span class="p">]</span> <span class="o">!=</span> <span class="p">(</span><span class="mi">0</span><span class="p">,)</span> <span class="o">*</span> <span class="n">len_kernel_size</span><span class="p">:</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;, padding=</span><span class="si">{pad}</span><span class="s1">&#39;</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kwargs</span><span class="p">[</span><span class="s1">&#39;dilate&#39;</span><span class="p">]</span> <span class="o">!=</span> <span class="p">(</span><span class="mi">1</span><span class="p">,)</span> <span class="o">*</span> <span class="n">len_kernel_size</span><span class="p">:</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;, dilation=</span><span class="si">{dilate}</span><span class="s1">&#39;</span>
<span class="k">if</span> <span class="nb">hasattr</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="s1">&#39;out_pad&#39;</span><span class="p">)</span> <span class="ow">and</span> <span class="bp">self</span><span class="o">.</span><span class="n">out_pad</span> <span class="o">!=</span> <span class="p">(</span><span class="mi">0</span><span class="p">,)</span> <span class="o">*</span> <span class="n">len_kernel_size</span><span class="p">:</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;, output_padding=</span><span class="si">{out_pad}</span><span class="s1">&#39;</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">out_pad</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">out_pad</span><span class="p">)</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kwargs</span><span class="p">[</span><span class="s1">&#39;num_group&#39;</span><span class="p">]</span> <span class="o">!=</span> <span class="mi">1</span><span class="p">:</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;, groups=</span><span class="si">{num_group}</span><span class="s1">&#39;</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">bias</span> <span class="ow">is</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;, bias=False&#39;</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">act</span><span class="p">:</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;, </span><span class="si">{}</span><span class="s1">&#39;</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">act</span><span class="p">)</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;)&#39;</span>
<span class="n">shape</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">weight</span><span class="o">.</span><span class="n">shape</span>
<span class="k">if</span> <span class="s1">&#39;Transpose&#39;</span> <span class="ow">in</span> <span class="bp">self</span><span class="o">.</span><span class="vm">__class__</span><span class="o">.</span><span class="vm">__name__</span><span class="p">:</span>
<span class="n">mapping</span> <span class="o">=</span> <span class="s1">&#39;</span><span class="si">{1}</span><span class="s1"> -&gt; </span><span class="si">{0}</span><span class="s1">&#39;</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">mapping</span> <span class="o">=</span> <span class="s1">&#39;</span><span class="si">{0}</span><span class="s1"> -&gt; </span><span class="si">{1}</span><span class="s1">&#39;</span>
<span class="k">return</span> <span class="n">s</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="vm">__class__</span><span class="o">.</span><span class="vm">__name__</span><span class="p">,</span>
<span class="n">mapping</span><span class="o">=</span><span class="n">mapping</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="k">if</span> <span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="k">else</span> <span class="kc">None</span><span class="p">,</span> <span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">]),</span>
<span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs</span><span class="p">)</span>
<div class="viewcode-block" id="Conv1D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.Conv1D">[docs]</a><span class="k">class</span> <span class="nc">Conv1D</span><span class="p">(</span><span class="n">_Conv</span><span class="p">):</span>
<span class="w"> </span><span class="sa">r</span><span class="sd">&quot;&quot;&quot;1D convolution layer (e.g. temporal convolution).</span>
<span class="sd"> This layer creates a convolution kernel that is convolved</span>
<span class="sd"> with the layer input over a single spatial (or temporal) dimension</span>
<span class="sd"> to produce a tensor of outputs.</span>
<span class="sd"> If `use_bias` is True, a bias vector is created and added to the outputs.</span>
<span class="sd"> Finally, if `activation` is not `None`,</span>
<span class="sd"> it is applied to the outputs as well.</span>
<span class="sd"> If `in_channels` is not specified, `Parameter` initialization will be</span>
<span class="sd"> deferred to the first time `forward` is called and `in_channels` will be</span>
<span class="sd"> inferred from the shape of input data.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> channels : int</span>
<span class="sd"> The dimensionality of the output space, i.e. the number of output</span>
<span class="sd"> channels (filters) in the convolution.</span>
<span class="sd"> kernel_size :int or tuple/list of 1 int</span>
<span class="sd"> Specifies the dimensions of the convolution window.</span>
<span class="sd"> strides : int or tuple/list of 1 int,</span>
<span class="sd"> Specify the strides of the convolution.</span>
<span class="sd"> padding : int or a tuple/list of 1 int,</span>
<span class="sd"> If padding is non-zero, then the input is implicitly zero-padded</span>
<span class="sd"> on both sides for padding number of points</span>
<span class="sd"> dilation : int or tuple/list of 1 int</span>
<span class="sd"> Specifies the dilation rate to use for dilated convolution.</span>
<span class="sd"> groups : int</span>
<span class="sd"> Controls the connections between inputs and outputs.</span>
<span class="sd"> At groups=1, all inputs are convolved to all outputs.</span>
<span class="sd"> At groups=2, the operation becomes equivalent to having two conv</span>
<span class="sd"> layers side by side, each seeing half the input channels, and producing</span>
<span class="sd"> half the output channels, and both subsequently concatenated.</span>
<span class="sd"> layout: str, default &#39;NCW&#39;</span>
<span class="sd"> Dimension ordering of data and weight. Only supports &#39;NCW&#39; layout for now.</span>
<span class="sd"> &#39;N&#39;, &#39;C&#39;, &#39;W&#39; stands for batch, channel, and width (time) dimensions</span>
<span class="sd"> respectively. Convolution is applied on the &#39;W&#39; dimension.</span>
<span class="sd"> in_channels : int, default 0</span>
<span class="sd"> The number of input channels to this layer. If not specified,</span>
<span class="sd"> initialization will be deferred to the first time `forward` is called</span>
<span class="sd"> and `in_channels` will be inferred from the shape of input data.</span>
<span class="sd"> activation : str</span>
<span class="sd"> Activation function to use. See :func:`~mxnet.npx.activation`.</span>
<span class="sd"> If you don&#39;t specify anything, no activation is applied</span>
<span class="sd"> (ie. &quot;linear&quot; activation: `a(x) = x`).</span>
<span class="sd"> use_bias : bool</span>
<span class="sd"> Whether the layer uses a bias vector.</span>
<span class="sd"> weight_initializer : str or `Initializer`</span>
<span class="sd"> Initializer for the `weight` weights matrix.</span>
<span class="sd"> bias_initializer : str or `Initializer`</span>
<span class="sd"> Initializer for the bias vector.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 3D input tensor with shape `(batch_size, in_channels, width)`</span>
<span class="sd"> when `layout` is `NCW`. For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 3D output tensor with shape `(batch_size, channels, out_width)`</span>
<span class="sd"> when `layout` is `NCW`. out_width is calculated as::</span>
<span class="sd"> out_width = floor((width+2*padding-dilation*(kernel_size-1)-1)/stride)+1</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="n">strides</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">dilation</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span>
<span class="n">groups</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCW&#39;</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">use_bias</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span>
<span class="n">weight_initializer</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">bias_initializer</span><span class="o">=</span><span class="s1">&#39;zeros&#39;</span><span class="p">,</span>
<span class="n">in_channels</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="o">==</span> <span class="s1">&#39;NCW&#39;</span><span class="p">,</span> <span class="s2">&quot;Only supports &#39;NCW&#39; layout for now&quot;</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">kernel_size</span> <span class="o">=</span> <span class="p">(</span><span class="n">kernel_size</span><span class="p">,)</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">1</span><span class="p">,</span> <span class="s2">&quot;kernel_size must be a number or a list of 1 ints&quot;</span>
<span class="n">op_name</span> <span class="o">=</span> <span class="s1">&#39;convolution&#39;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">Conv1D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="n">strides</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">dilation</span><span class="p">,</span> <span class="n">groups</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span>
<span class="n">in_channels</span><span class="p">,</span> <span class="n">activation</span><span class="p">,</span> <span class="n">use_bias</span><span class="p">,</span> <span class="n">weight_initializer</span><span class="p">,</span> <span class="n">bias_initializer</span><span class="p">,</span>
<span class="n">op_name</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="Conv2D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.Conv2D">[docs]</a><span class="k">class</span> <span class="nc">Conv2D</span><span class="p">(</span><span class="n">_Conv</span><span class="p">):</span>
<span class="w"> </span><span class="sa">r</span><span class="sd">&quot;&quot;&quot;2D convolution layer (e.g. spatial convolution over images).</span>
<span class="sd"> This layer creates a convolution kernel that is convolved</span>
<span class="sd"> with the layer input to produce a tensor of</span>
<span class="sd"> outputs. If `use_bias` is True,</span>
<span class="sd"> a bias vector is created and added to the outputs. Finally, if</span>
<span class="sd"> `activation` is not `None`, it is applied to the outputs as well.</span>
<span class="sd"> If `in_channels` is not specified, `Parameter` initialization will be</span>
<span class="sd"> deferred to the first time `forward` is called and `in_channels` will be</span>
<span class="sd"> inferred from the shape of input data.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> channels : int</span>
<span class="sd"> The dimensionality of the output space, i.e. the number of output</span>
<span class="sd"> channels (filters) in the convolution.</span>
<span class="sd"> kernel_size :int or tuple/list of 2 int</span>
<span class="sd"> Specifies the dimensions of the convolution window.</span>
<span class="sd"> strides : int or tuple/list of 2 int,</span>
<span class="sd"> Specify the strides of the convolution.</span>
<span class="sd"> padding : int or a tuple/list of 2 int,</span>
<span class="sd"> If padding is non-zero, then the input is implicitly zero-padded</span>
<span class="sd"> on both sides for padding number of points</span>
<span class="sd"> dilation : int or tuple/list of 2 int</span>
<span class="sd"> Specifies the dilation rate to use for dilated convolution.</span>
<span class="sd"> groups : int</span>
<span class="sd"> Controls the connections between inputs and outputs.</span>
<span class="sd"> At groups=1, all inputs are convolved to all outputs.</span>
<span class="sd"> At groups=2, the operation becomes equivalent to having two conv</span>
<span class="sd"> layers side by side, each seeing half the input channels, and producing</span>
<span class="sd"> half the output channels, and both subsequently concatenated.</span>
<span class="sd"> layout : str, default &#39;NCHW&#39;</span>
<span class="sd"> Dimension ordering of data and weight. Only supports &#39;NCHW&#39; and &#39;NHWC&#39;</span>
<span class="sd"> layout for now. &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39; stands for batch, channel, height,</span>
<span class="sd"> and width dimensions respectively. Convolution is applied on the &#39;H&#39; and</span>
<span class="sd"> &#39;W&#39; dimensions.</span>
<span class="sd"> in_channels : int, default 0</span>
<span class="sd"> The number of input channels to this layer. If not specified,</span>
<span class="sd"> initialization will be deferred to the first time `forward` is called</span>
<span class="sd"> and `in_channels` will be inferred from the shape of input data.</span>
<span class="sd"> activation : str</span>
<span class="sd"> Activation function to use. See :func:`~mxnet.npx.activation`.</span>
<span class="sd"> If you don&#39;t specify anything, no activation is applied</span>
<span class="sd"> (ie. &quot;linear&quot; activation: `a(x) = x`).</span>
<span class="sd"> use_bias : bool</span>
<span class="sd"> Whether the layer uses a bias vector.</span>
<span class="sd"> weight_initializer : str or `Initializer`</span>
<span class="sd"> Initializer for the `weight` weights matrix.</span>
<span class="sd"> bias_initializer : str or `Initializer`</span>
<span class="sd"> Initializer for the bias vector.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 4D input tensor with shape</span>
<span class="sd"> `(batch_size, in_channels, height, width)` when `layout` is `NCHW`.</span>
<span class="sd"> For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 4D output tensor with shape</span>
<span class="sd"> `(batch_size, channels, out_height, out_width)` when `layout` is `NCHW`.</span>
<span class="sd"> out_height and out_width are calculated as::</span>
<span class="sd"> out_height = floor((height+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/stride[0])+1</span>
<span class="sd"> out_width = floor((width+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/stride[1])+1</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="n">strides</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">padding</span><span class="o">=</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span>
<span class="n">dilation</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">groups</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span>
<span class="n">activation</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">use_bias</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">weight_initializer</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span>
<span class="n">bias_initializer</span><span class="o">=</span><span class="s1">&#39;zeros&#39;</span><span class="p">,</span> <span class="n">in_channels</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span> <span class="s1">&#39;NHWC&#39;</span><span class="p">),</span> <span class="s2">&quot;Only supports &#39;NCHW&#39; and &#39;NHWC&#39; layout for now&quot;</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">kernel_size</span> <span class="o">=</span> <span class="p">(</span><span class="n">kernel_size</span><span class="p">,)</span><span class="o">*</span><span class="mi">2</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">2</span><span class="p">,</span> <span class="s2">&quot;kernel_size must be a number or a list of 2 ints&quot;</span>
<span class="n">op_name</span> <span class="o">=</span> <span class="s1">&#39;convolution&#39;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">Conv2D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="n">strides</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">dilation</span><span class="p">,</span> <span class="n">groups</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span>
<span class="n">in_channels</span><span class="p">,</span> <span class="n">activation</span><span class="p">,</span> <span class="n">use_bias</span><span class="p">,</span> <span class="n">weight_initializer</span><span class="p">,</span> <span class="n">bias_initializer</span><span class="p">,</span>
<span class="n">op_name</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="Conv3D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.Conv3D">[docs]</a><span class="k">class</span> <span class="nc">Conv3D</span><span class="p">(</span><span class="n">_Conv</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;3D convolution layer (e.g. spatial convolution over volumes).</span>
<span class="sd"> This layer creates a convolution kernel that is convolved</span>
<span class="sd"> with the layer input to produce a tensor of</span>
<span class="sd"> outputs. If `use_bias` is `True`,</span>
<span class="sd"> a bias vector is created and added to the outputs. Finally, if</span>
<span class="sd"> `activation` is not `None`, it is applied to the outputs as well.</span>
<span class="sd"> If `in_channels` is not specified, `Parameter` initialization will be</span>
<span class="sd"> deferred to the first time `forward` is called and `in_channels` will be</span>
<span class="sd"> inferred from the shape of input data.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> channels : int</span>
<span class="sd"> The dimensionality of the output space, i.e. the number of output</span>
<span class="sd"> channels (filters) in the convolution.</span>
<span class="sd"> kernel_size :int or tuple/list of 3 int</span>
<span class="sd"> Specifies the dimensions of the convolution window.</span>
<span class="sd"> strides : int or tuple/list of 3 int,</span>
<span class="sd"> Specify the strides of the convolution.</span>
<span class="sd"> padding : int or a tuple/list of 3 int,</span>
<span class="sd"> If padding is non-zero, then the input is implicitly zero-padded</span>
<span class="sd"> on both sides for padding number of points</span>
<span class="sd"> dilation : int or tuple/list of 3 int</span>
<span class="sd"> Specifies the dilation rate to use for dilated convolution.</span>
<span class="sd"> groups : int</span>
<span class="sd"> Controls the connections between inputs and outputs.</span>
<span class="sd"> At groups=1, all inputs are convolved to all outputs.</span>
<span class="sd"> At groups=2, the operation becomes equivalent to having two conv</span>
<span class="sd"> layers side by side, each seeing half the input channels, and producing</span>
<span class="sd"> half the output channels, and both subsequently concatenated.</span>
<span class="sd"> layout : str, default &#39;NCDHW&#39;</span>
<span class="sd"> Dimension ordering of data and weight. Only supports &#39;NCDHW&#39; and &#39;NDHWC&#39;</span>
<span class="sd"> layout for now. &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39;, &#39;D&#39; stands for batch, channel, height,</span>
<span class="sd"> width and depth dimensions respectively. Convolution is applied on the &#39;D&#39;,</span>
<span class="sd"> &#39;H&#39; and &#39;W&#39; dimensions.</span>
<span class="sd"> in_channels : int, default 0</span>
<span class="sd"> The number of input channels to this layer. If not specified,</span>
<span class="sd"> initialization will be deferred to the first time `forward` is called</span>
<span class="sd"> and `in_channels` will be inferred from the shape of input data.</span>
<span class="sd"> activation : str</span>
<span class="sd"> Activation function to use. See :func:`~mxnet.npx.activation`.</span>
<span class="sd"> If you don&#39;t specify anything, no activation is applied</span>
<span class="sd"> (ie. &quot;linear&quot; activation: `a(x) = x`).</span>
<span class="sd"> use_bias : bool</span>
<span class="sd"> Whether the layer uses a bias vector.</span>
<span class="sd"> weight_initializer : str or `Initializer`</span>
<span class="sd"> Initializer for the `weight` weights matrix.</span>
<span class="sd"> bias_initializer : str or `Initializer`</span>
<span class="sd"> Initializer for the bias vector.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 5D input tensor with shape</span>
<span class="sd"> `(batch_size, in_channels, depth, height, width)` when `layout` is `NCDHW`.</span>
<span class="sd"> For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 5D output tensor with shape</span>
<span class="sd"> `(batch_size, channels, out_depth, out_height, out_width)` when `layout` is `NCDHW`.</span>
<span class="sd"> out_depth, out_height and out_width are calculated as::</span>
<span class="sd"> out_depth = floor((depth+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/stride[0])+1</span>
<span class="sd"> out_height = floor((height+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/stride[1])+1</span>
<span class="sd"> out_width = floor((width+2*padding[2]-dilation[2]*(kernel_size[2]-1)-1)/stride[2])+1</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="n">strides</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">padding</span><span class="o">=</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span>
<span class="n">dilation</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">groups</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCDHW&#39;</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span>
<span class="n">use_bias</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">weight_initializer</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">bias_initializer</span><span class="o">=</span><span class="s1">&#39;zeros&#39;</span><span class="p">,</span>
<span class="n">in_channels</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCDHW&#39;</span><span class="p">,</span> <span class="s1">&#39;NDHWC&#39;</span><span class="p">),</span> <span class="s2">&quot;Only supports &#39;NCDHW&#39; and &#39;NDHWC&#39; layout for now&quot;</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">kernel_size</span> <span class="o">=</span> <span class="p">(</span><span class="n">kernel_size</span><span class="p">,)</span><span class="o">*</span><span class="mi">3</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">3</span><span class="p">,</span> <span class="s2">&quot;kernel_size must be a number or a list of 3 ints&quot;</span>
<span class="n">op_name</span> <span class="o">=</span> <span class="s1">&#39;convolution&#39;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">Conv3D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="n">strides</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">dilation</span><span class="p">,</span> <span class="n">groups</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span>
<span class="n">in_channels</span><span class="p">,</span> <span class="n">activation</span><span class="p">,</span> <span class="n">use_bias</span><span class="p">,</span> <span class="n">weight_initializer</span><span class="p">,</span> <span class="n">bias_initializer</span><span class="p">,</span>
<span class="n">op_name</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="Conv1DTranspose"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.Conv1DTranspose">[docs]</a><span class="k">class</span> <span class="nc">Conv1DTranspose</span><span class="p">(</span><span class="n">_Conv</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Transposed 1D convolution layer (sometimes called Deconvolution).</span>
<span class="sd"> The need for transposed convolutions generally arises</span>
<span class="sd"> from the desire to use a transformation going in the opposite direction</span>
<span class="sd"> of a normal convolution, i.e., from something that has the shape of the</span>
<span class="sd"> output of some convolution to something that has the shape of its input</span>
<span class="sd"> while maintaining a connectivity pattern that is compatible with</span>
<span class="sd"> said convolution.</span>
<span class="sd"> If `in_channels` is not specified, `Parameter` initialization will be</span>
<span class="sd"> deferred to the first time `forward` is called and `in_channels` will be</span>
<span class="sd"> inferred from the shape of input data.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> channels : int</span>
<span class="sd"> The dimensionality of the output space, i.e. the number of output</span>
<span class="sd"> channels (filters) in the convolution.</span>
<span class="sd"> kernel_size :int or tuple/list of 1 int</span>
<span class="sd"> Specifies the dimensions of the convolution window.</span>
<span class="sd"> strides : int or tuple/list of 1 int</span>
<span class="sd"> Specify the strides of the convolution.</span>
<span class="sd"> padding : int or a tuple/list of 1 int,</span>
<span class="sd"> If padding is non-zero, then the input is implicitly zero-padded</span>
<span class="sd"> on both sides for padding number of points</span>
<span class="sd"> output_padding: int or a tuple/list of 1 int</span>
<span class="sd"> Controls the amount of implicit zero-paddings on both sides of the</span>
<span class="sd"> output for output_padding number of points for each dimension.</span>
<span class="sd"> dilation : int or tuple/list of 1 int</span>
<span class="sd"> Controls the spacing between the kernel points; also known as the</span>
<span class="sd"> a trous algorithm</span>
<span class="sd"> groups : int</span>
<span class="sd"> Controls the connections between inputs and outputs.</span>
<span class="sd"> At groups=1, all inputs are convolved to all outputs.</span>
<span class="sd"> At groups=2, the operation becomes equivalent to having two conv</span>
<span class="sd"> layers side by side, each seeing half the input channels, and producing</span>
<span class="sd"> half the output channels, and both subsequently concatenated.</span>
<span class="sd"> layout : str, default &#39;NCW&#39;</span>
<span class="sd"> Dimension ordering of data and weight. Only supports &#39;NCW&#39; layout for now.</span>
<span class="sd"> &#39;N&#39;, &#39;C&#39;, &#39;W&#39; stands for batch, channel, and width (time) dimensions</span>
<span class="sd"> respectively. Convolution is applied on the &#39;W&#39; dimension.</span>
<span class="sd"> in_channels : int, default 0</span>
<span class="sd"> The number of input channels to this layer. If not specified,</span>
<span class="sd"> initialization will be deferred to the first time `forward` is called</span>
<span class="sd"> and `in_channels` will be inferred from the shape of input data.</span>
<span class="sd"> activation : str</span>
<span class="sd"> Activation function to use. See :func:`~mxnet.npx.activation`.</span>
<span class="sd"> If you don&#39;t specify anything, no activation is applied</span>
<span class="sd"> (ie. &quot;linear&quot; activation: `a(x) = x`).</span>
<span class="sd"> use_bias : bool</span>
<span class="sd"> Whether the layer uses a bias vector.</span>
<span class="sd"> weight_initializer : str or `Initializer`</span>
<span class="sd"> Initializer for the `weight` weights matrix.</span>
<span class="sd"> bias_initializer : str or `Initializer`</span>
<span class="sd"> Initializer for the bias vector.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 3D input tensor with shape `(batch_size, in_channels, width)`</span>
<span class="sd"> when `layout` is `NCW`. For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 3D output tensor with shape `(batch_size, channels, out_width)`</span>
<span class="sd"> when `layout` is `NCW`. out_width is calculated as::</span>
<span class="sd"> out_width = (width-1)*strides-2*padding+kernel_size+output_padding</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="n">strides</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">output_padding</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span>
<span class="n">dilation</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">groups</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCW&#39;</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">use_bias</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span>
<span class="n">weight_initializer</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">bias_initializer</span><span class="o">=</span><span class="s1">&#39;zeros&#39;</span><span class="p">,</span>
<span class="n">in_channels</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="o">==</span> <span class="s1">&#39;NCW&#39;</span><span class="p">,</span> <span class="s2">&quot;Only supports &#39;NCW&#39; layout for now&quot;</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">kernel_size</span> <span class="o">=</span> <span class="p">(</span><span class="n">kernel_size</span><span class="p">,)</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">output_padding</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">output_padding</span> <span class="o">=</span> <span class="p">(</span><span class="n">output_padding</span><span class="p">,)</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">1</span><span class="p">,</span> <span class="s2">&quot;kernel_size must be a number or a list of 1 ints&quot;</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">output_padding</span><span class="p">)</span> <span class="o">==</span> <span class="mi">1</span><span class="p">,</span> <span class="s2">&quot;output_padding must be a number or a list of 1 ints&quot;</span>
<span class="n">op_name</span> <span class="o">=</span> <span class="s1">&#39;deconvolution&#39;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">Conv1DTranspose</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="n">strides</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">dilation</span><span class="p">,</span> <span class="n">groups</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span>
<span class="n">in_channels</span><span class="p">,</span> <span class="n">activation</span><span class="p">,</span> <span class="n">use_bias</span><span class="p">,</span> <span class="n">weight_initializer</span><span class="p">,</span>
<span class="n">bias_initializer</span><span class="p">,</span> <span class="n">op_name</span><span class="o">=</span><span class="n">op_name</span><span class="p">,</span> <span class="n">adj</span><span class="o">=</span><span class="n">output_padding</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">outpad</span> <span class="o">=</span> <span class="n">output_padding</span></div>
<div class="viewcode-block" id="Conv2DTranspose"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.Conv2DTranspose">[docs]</a><span class="k">class</span> <span class="nc">Conv2DTranspose</span><span class="p">(</span><span class="n">_Conv</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Transposed 2D convolution layer (sometimes called Deconvolution).</span>
<span class="sd"> The need for transposed convolutions generally arises</span>
<span class="sd"> from the desire to use a transformation going in the opposite direction</span>
<span class="sd"> of a normal convolution, i.e., from something that has the shape of the</span>
<span class="sd"> output of some convolution to something that has the shape of its input</span>
<span class="sd"> while maintaining a connectivity pattern that is compatible with</span>
<span class="sd"> said convolution.</span>
<span class="sd"> If `in_channels` is not specified, `Parameter` initialization will be</span>
<span class="sd"> deferred to the first time `forward` is called and `in_channels` will be</span>
<span class="sd"> inferred from the shape of input data.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> channels : int</span>
<span class="sd"> The dimensionality of the output space, i.e. the number of output</span>
<span class="sd"> channels (filters) in the convolution.</span>
<span class="sd"> kernel_size :int or tuple/list of 2 int</span>
<span class="sd"> Specifies the dimensions of the convolution window.</span>
<span class="sd"> strides : int or tuple/list of 2 int</span>
<span class="sd"> Specify the strides of the convolution.</span>
<span class="sd"> padding : int or a tuple/list of 2 int,</span>
<span class="sd"> If padding is non-zero, then the input is implicitly zero-padded</span>
<span class="sd"> on both sides for padding number of points</span>
<span class="sd"> output_padding: int or a tuple/list of 2 int</span>
<span class="sd"> Controls the amount of implicit zero-paddings on both sides of the</span>
<span class="sd"> output for output_padding number of points for each dimension.</span>
<span class="sd"> dilation : int or tuple/list of 2 int</span>
<span class="sd"> Controls the spacing between the kernel points; also known as the</span>
<span class="sd"> a trous algorithm</span>
<span class="sd"> groups : int</span>
<span class="sd"> Controls the connections between inputs and outputs.</span>
<span class="sd"> At groups=1, all inputs are convolved to all outputs.</span>
<span class="sd"> At groups=2, the operation becomes equivalent to having two conv</span>
<span class="sd"> layers side by side, each seeing half the input channels, and producing</span>
<span class="sd"> half the output channels, and both subsequently concatenated.</span>
<span class="sd"> layout : str, default &#39;NCHW&#39;</span>
<span class="sd"> Dimension ordering of data and weight. Only supports &#39;NCHW&#39; and &#39;NHWC&#39;</span>
<span class="sd"> layout for now. &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39; stands for batch, channel, height,</span>
<span class="sd"> and width dimensions respectively. Convolution is applied on the &#39;H&#39; and</span>
<span class="sd"> &#39;W&#39; dimensions.</span>
<span class="sd"> in_channels : int, default 0</span>
<span class="sd"> The number of input channels to this layer. If not specified,</span>
<span class="sd"> initialization will be deferred to the first time `forward` is called</span>
<span class="sd"> and `in_channels` will be inferred from the shape of input data.</span>
<span class="sd"> activation : str</span>
<span class="sd"> Activation function to use. See :func:`~mxnet.npx.activation`.</span>
<span class="sd"> If you don&#39;t specify anything, no activation is applied</span>
<span class="sd"> (ie. &quot;linear&quot; activation: `a(x) = x`).</span>
<span class="sd"> use_bias : bool</span>
<span class="sd"> Whether the layer uses a bias vector.</span>
<span class="sd"> weight_initializer : str or `Initializer`</span>
<span class="sd"> Initializer for the `weight` weights matrix.</span>
<span class="sd"> bias_initializer : str or `Initializer`</span>
<span class="sd"> Initializer for the bias vector.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 4D input tensor with shape</span>
<span class="sd"> `(batch_size, in_channels, height, width)` when `layout` is `NCHW`.</span>
<span class="sd"> For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 4D output tensor with shape</span>
<span class="sd"> `(batch_size, channels, out_height, out_width)` when `layout` is `NCHW`.</span>
<span class="sd"> out_height and out_width are calculated as::</span>
<span class="sd"> out_height = (height-1)*strides[0]-2*padding[0]+kernel_size[0]+output_padding[0]</span>
<span class="sd"> out_width = (width-1)*strides[1]-2*padding[1]+kernel_size[1]+output_padding[1]</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="n">strides</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">padding</span><span class="o">=</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span>
<span class="n">output_padding</span><span class="o">=</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span> <span class="n">dilation</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">groups</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span>
<span class="n">activation</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">use_bias</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">weight_initializer</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span>
<span class="n">bias_initializer</span><span class="o">=</span><span class="s1">&#39;zeros&#39;</span><span class="p">,</span> <span class="n">in_channels</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span> <span class="s1">&#39;NHWC&#39;</span><span class="p">),</span> <span class="s2">&quot;Only supports &#39;NCHW&#39; and &#39;NHWC&#39; layout for now&quot;</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">kernel_size</span> <span class="o">=</span> <span class="p">(</span><span class="n">kernel_size</span><span class="p">,)</span><span class="o">*</span><span class="mi">2</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">output_padding</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">output_padding</span> <span class="o">=</span> <span class="p">(</span><span class="n">output_padding</span><span class="p">,)</span><span class="o">*</span><span class="mi">2</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">2</span><span class="p">,</span> <span class="s2">&quot;kernel_size must be a number or a list of 2 ints&quot;</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">output_padding</span><span class="p">)</span> <span class="o">==</span> <span class="mi">2</span><span class="p">,</span> <span class="s2">&quot;output_padding must be a number or a list of 2 ints&quot;</span>
<span class="n">op_name</span> <span class="o">=</span> <span class="s1">&#39;deconvolution&#39;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">Conv2DTranspose</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="n">strides</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">dilation</span><span class="p">,</span> <span class="n">groups</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span>
<span class="n">in_channels</span><span class="p">,</span> <span class="n">activation</span><span class="p">,</span> <span class="n">use_bias</span><span class="p">,</span> <span class="n">weight_initializer</span><span class="p">,</span>
<span class="n">bias_initializer</span><span class="p">,</span> <span class="n">op_name</span><span class="o">=</span><span class="n">op_name</span><span class="p">,</span> <span class="n">adj</span><span class="o">=</span><span class="n">output_padding</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">outpad</span> <span class="o">=</span> <span class="n">output_padding</span></div>
<div class="viewcode-block" id="Conv3DTranspose"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.Conv3DTranspose">[docs]</a><span class="k">class</span> <span class="nc">Conv3DTranspose</span><span class="p">(</span><span class="n">_Conv</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Transposed 3D convolution layer (sometimes called Deconvolution).</span>
<span class="sd"> The need for transposed convolutions generally arises</span>
<span class="sd"> from the desire to use a transformation going in the opposite direction</span>
<span class="sd"> of a normal convolution, i.e., from something that has the shape of the</span>
<span class="sd"> output of some convolution to something that has the shape of its input</span>
<span class="sd"> while maintaining a connectivity pattern that is compatible with</span>
<span class="sd"> said convolution.</span>
<span class="sd"> If `in_channels` is not specified, `Parameter` initialization will be</span>
<span class="sd"> deferred to the first time `forward` is called and `in_channels` will be</span>
<span class="sd"> inferred from the shape of input data.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> channels : int</span>
<span class="sd"> The dimensionality of the output space, i.e. the number of output</span>
<span class="sd"> channels (filters) in the convolution.</span>
<span class="sd"> kernel_size :int or tuple/list of 3 int</span>
<span class="sd"> Specifies the dimensions of the convolution window.</span>
<span class="sd"> strides : int or tuple/list of 3 int</span>
<span class="sd"> Specify the strides of the convolution.</span>
<span class="sd"> padding : int or a tuple/list of 3 int,</span>
<span class="sd"> If padding is non-zero, then the input is implicitly zero-padded</span>
<span class="sd"> on both sides for padding number of points</span>
<span class="sd"> output_padding: int or a tuple/list of 3 int</span>
<span class="sd"> Controls the amount of implicit zero-paddings on both sides of the</span>
<span class="sd"> output for output_padding number of points for each dimension.</span>
<span class="sd"> dilation : int or tuple/list of 3 int</span>
<span class="sd"> Controls the spacing between the kernel points; also known as the</span>
<span class="sd"> a trous algorithm.</span>
<span class="sd"> groups : int</span>
<span class="sd"> Controls the connections between inputs and outputs.</span>
<span class="sd"> At groups=1, all inputs are convolved to all outputs.</span>
<span class="sd"> At groups=2, the operation becomes equivalent to having two conv</span>
<span class="sd"> layers side by side, each seeing half the input channels, and producing</span>
<span class="sd"> half the output channels, and both subsequently concatenated.</span>
<span class="sd"> layout : str, default &#39;NCDHW&#39;</span>
<span class="sd"> Dimension ordering of data and weight. Only supports &#39;NCDHW&#39; and &#39;NDHWC&#39;</span>
<span class="sd"> layout for now. &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39;, &#39;D&#39; stands for batch, channel, height,</span>
<span class="sd"> width and depth dimensions respectively. Convolution is applied on the &#39;D&#39;,</span>
<span class="sd"> &#39;H&#39; and &#39;W&#39; dimensions.</span>
<span class="sd"> in_channels : int, default 0</span>
<span class="sd"> The number of input channels to this layer. If not specified,</span>
<span class="sd"> initialization will be deferred to the first time `forward` is called</span>
<span class="sd"> and `in_channels` will be inferred from the shape of input data.</span>
<span class="sd"> activation : str</span>
<span class="sd"> Activation function to use. See :func:`~mxnet.npx.activation`.</span>
<span class="sd"> If you don&#39;t specify anything, no activation is applied</span>
<span class="sd"> (ie. &quot;linear&quot; activation: `a(x) = x`).</span>
<span class="sd"> use_bias : bool</span>
<span class="sd"> Whether the layer uses a bias vector.</span>
<span class="sd"> weight_initializer : str or `Initializer`</span>
<span class="sd"> Initializer for the `weight` weights matrix.</span>
<span class="sd"> bias_initializer : str or `Initializer`</span>
<span class="sd"> Initializer for the bias vector.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 5D input tensor with shape</span>
<span class="sd"> `(batch_size, in_channels, depth, height, width)` when `layout` is `NCDHW`.</span>
<span class="sd"> For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 5D output tensor with shape</span>
<span class="sd"> `(batch_size, channels, out_depth, out_height, out_width)` when `layout` is `NCDHW`.</span>
<span class="sd"> out_depth, out_height and out_width are calculated as::</span>
<span class="sd"> out_depth = (depth-1)*strides[0]-2*padding[0]+kernel_size[0]+output_padding[0]</span>
<span class="sd"> out_height = (height-1)*strides[1]-2*padding[1]+kernel_size[1]+output_padding[1]</span>
<span class="sd"> out_width = (width-1)*strides[2]-2*padding[2]+kernel_size[2]+output_padding[2]</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="n">strides</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">padding</span><span class="o">=</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span>
<span class="n">output_padding</span><span class="o">=</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span> <span class="n">dilation</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">groups</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCDHW&#39;</span><span class="p">,</span>
<span class="n">activation</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">use_bias</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">weight_initializer</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span>
<span class="n">bias_initializer</span><span class="o">=</span><span class="s1">&#39;zeros&#39;</span><span class="p">,</span> <span class="n">in_channels</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCDHW&#39;</span><span class="p">,</span> <span class="s1">&#39;NDHWC&#39;</span><span class="p">),</span> <span class="s2">&quot;Only supports &#39;NCDHW&#39; and &#39;NDHWC&#39; layout for now&quot;</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">kernel_size</span> <span class="o">=</span> <span class="p">(</span><span class="n">kernel_size</span><span class="p">,)</span><span class="o">*</span><span class="mi">3</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">output_padding</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">output_padding</span> <span class="o">=</span> <span class="p">(</span><span class="n">output_padding</span><span class="p">,)</span><span class="o">*</span><span class="mi">3</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">3</span><span class="p">,</span> <span class="s2">&quot;kernel_size must be a number or a list of 3 ints&quot;</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">output_padding</span><span class="p">)</span> <span class="o">==</span> <span class="mi">3</span><span class="p">,</span> <span class="s2">&quot;output_padding must be a number or a list of 3 ints&quot;</span>
<span class="n">op_name</span> <span class="o">=</span> <span class="s1">&#39;deconvolution&#39;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">Conv3DTranspose</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="n">strides</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">dilation</span><span class="p">,</span> <span class="n">groups</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span>
<span class="n">in_channels</span><span class="p">,</span> <span class="n">activation</span><span class="p">,</span> <span class="n">use_bias</span><span class="p">,</span> <span class="n">weight_initializer</span><span class="p">,</span> <span class="n">bias_initializer</span><span class="p">,</span>
<span class="n">op_name</span><span class="o">=</span><span class="n">op_name</span><span class="p">,</span> <span class="n">adj</span><span class="o">=</span><span class="n">output_padding</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">outpad</span> <span class="o">=</span> <span class="n">output_padding</span></div>
<span class="nd">@use_np</span>
<span class="k">class</span> <span class="nc">_Pooling</span><span class="p">(</span><span class="n">HybridBlock</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Abstract class for different pooling layers.&quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">pool_size</span><span class="p">,</span> <span class="n">strides</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">ceil_mode</span><span class="p">,</span> <span class="n">global_pool</span><span class="p">,</span>
<span class="n">pool_type</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="n">count_include_pad</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="nb">super</span><span class="p">(</span><span class="n">_Pooling</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span><span class="o">**</span><span class="n">kwargs</span><span class="p">)</span>
<span class="k">if</span> <span class="n">strides</span> <span class="ow">is</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">strides</span> <span class="o">=</span> <span class="n">pool_size</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">strides</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">strides</span> <span class="o">=</span> <span class="p">(</span><span class="n">strides</span><span class="p">,)</span><span class="o">*</span><span class="nb">len</span><span class="p">(</span><span class="n">pool_size</span><span class="p">)</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">padding</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">padding</span> <span class="o">=</span> <span class="p">(</span><span class="n">padding</span><span class="p">,)</span><span class="o">*</span><span class="nb">len</span><span class="p">(</span><span class="n">pool_size</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kwargs</span> <span class="o">=</span> <span class="p">{</span>
<span class="s1">&#39;kernel&#39;</span><span class="p">:</span> <span class="n">pool_size</span><span class="p">,</span> <span class="s1">&#39;stride&#39;</span><span class="p">:</span> <span class="n">strides</span><span class="p">,</span> <span class="s1">&#39;pad&#39;</span><span class="p">:</span> <span class="n">padding</span><span class="p">,</span>
<span class="s1">&#39;global_pool&#39;</span><span class="p">:</span> <span class="n">global_pool</span><span class="p">,</span> <span class="s1">&#39;pool_type&#39;</span><span class="p">:</span> <span class="n">pool_type</span><span class="p">,</span>
<span class="s1">&#39;layout&#39;</span><span class="p">:</span> <span class="n">layout</span><span class="p">,</span>
<span class="s1">&#39;pooling_convention&#39;</span><span class="p">:</span> <span class="s1">&#39;full&#39;</span> <span class="k">if</span> <span class="n">ceil_mode</span> <span class="k">else</span> <span class="s1">&#39;valid&#39;</span><span class="p">}</span>
<span class="k">if</span> <span class="n">count_include_pad</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kwargs</span><span class="p">[</span><span class="s1">&#39;count_include_pad&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="n">count_include_pad</span>
<span class="k">def</span> <span class="nf">_alias</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="k">return</span> <span class="s1">&#39;pool&#39;</span>
<span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="k">return</span> <span class="n">npx</span><span class="o">.</span><span class="n">pooling</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">&#39;fwd&#39;</span><span class="p">,</span> <span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs</span><span class="p">)</span>
<span class="k">def</span> <span class="fm">__repr__</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="n">s</span> <span class="o">=</span> <span class="s1">&#39;</span><span class="si">{name}</span><span class="s1">(size=</span><span class="si">{kernel}</span><span class="s1">, stride=</span><span class="si">{stride}</span><span class="s1">, padding=</span><span class="si">{pad}</span><span class="s1">, ceil_mode=</span><span class="si">{ceil_mode}</span><span class="s1">&#39;</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;, global_pool=</span><span class="si">{global_pool}</span><span class="s1">, pool_type=</span><span class="si">{pool_type}</span><span class="s1">, layout=</span><span class="si">{layout}</span><span class="s1">)&#39;</span>
<span class="k">return</span> <span class="n">s</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="vm">__class__</span><span class="o">.</span><span class="vm">__name__</span><span class="p">,</span>
<span class="n">ceil_mode</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs</span><span class="p">[</span><span class="s1">&#39;pooling_convention&#39;</span><span class="p">]</span> <span class="o">==</span> <span class="s1">&#39;full&#39;</span><span class="p">,</span>
<span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs</span><span class="p">)</span>
<div class="viewcode-block" id="MaxPool1D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.MaxPool1D">[docs]</a><span class="k">class</span> <span class="nc">MaxPool1D</span><span class="p">(</span><span class="n">_Pooling</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Max pooling operation for one dimensional data.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> pool_size: int</span>
<span class="sd"> Size of the max pooling windows.</span>
<span class="sd"> strides: int, or None</span>
<span class="sd"> Factor by which to downscale. E.g. 2 will halve the input size.</span>
<span class="sd"> If `None`, it will default to `pool_size`.</span>
<span class="sd"> padding: int</span>
<span class="sd"> If padding is non-zero, then the input is implicitly</span>
<span class="sd"> zero-padded on both sides for padding number of points.</span>
<span class="sd"> layout : str, default &#39;NCW&#39;</span>
<span class="sd"> Dimension ordering of data and out (&#39;NCW&#39; or &#39;NWC&#39;).</span>
<span class="sd"> &#39;N&#39;, &#39;C&#39;, &#39;W&#39; stands for batch, channel, and width (time) dimensions</span>
<span class="sd"> respectively. Pooling is applied on the W dimension.</span>
<span class="sd"> ceil_mode : bool, default False</span>
<span class="sd"> When `True`, will use ceil instead of floor to compute the output shape.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 3D input tensor with shape `(batch_size, in_channels, width)`</span>
<span class="sd"> when `layout` is `NCW`. For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 3D output tensor with shape `(batch_size, channels, out_width)`</span>
<span class="sd"> when `layout` is `NCW`. out_width is calculated as::</span>
<span class="sd"> out_width = floor((width+2*padding-pool_size)/strides)+1</span>
<span class="sd"> When `ceil_mode` is `True`, ceil will be used instead of floor in this</span>
<span class="sd"> equation.</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">pool_size</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">strides</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCW&#39;</span><span class="p">,</span>
<span class="n">ceil_mode</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCW&#39;</span><span class="p">,</span> <span class="s1">&#39;NWC&#39;</span><span class="p">),</span>\
<span class="s2">&quot;Only NCW and NWC layouts are valid for 1D Pooling&quot;</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">pool_size</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">pool_size</span> <span class="o">=</span> <span class="p">(</span><span class="n">pool_size</span><span class="p">,)</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">pool_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">1</span><span class="p">,</span> <span class="s2">&quot;pool_size must be a number or a list of 1 ints&quot;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">MaxPool1D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="n">pool_size</span><span class="p">,</span> <span class="n">strides</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">ceil_mode</span><span class="p">,</span> <span class="kc">False</span><span class="p">,</span> <span class="s1">&#39;max&#39;</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="MaxPool2D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.MaxPool2D">[docs]</a><span class="k">class</span> <span class="nc">MaxPool2D</span><span class="p">(</span><span class="n">_Pooling</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Max pooling operation for two dimensional (spatial) data.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> pool_size: int or list/tuple of 2 ints,</span>
<span class="sd"> Size of the max pooling windows.</span>
<span class="sd"> strides: int, list/tuple of 2 ints, or None.</span>
<span class="sd"> Factor by which to downscale. E.g. 2 will halve the input size.</span>
<span class="sd"> If `None`, it will default to `pool_size`.</span>
<span class="sd"> padding: int or list/tuple of 2 ints,</span>
<span class="sd"> If padding is non-zero, then the input is implicitly</span>
<span class="sd"> zero-padded on both sides for padding number of points.</span>
<span class="sd"> layout : str, default &#39;NCHW&#39;</span>
<span class="sd"> Dimension ordering of data and out (&#39;NCHW&#39; or &#39;NHWC&#39;).</span>
<span class="sd"> &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39; stands for batch, channel, height, and width</span>
<span class="sd"> dimensions respectively. padding is applied on &#39;H&#39; and &#39;W&#39; dimension.</span>
<span class="sd"> ceil_mode : bool, default False</span>
<span class="sd"> When `True`, will use ceil instead of floor to compute the output shape.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 4D input tensor with shape</span>
<span class="sd"> `(batch_size, in_channels, height, width)` when `layout` is `NCHW`.</span>
<span class="sd"> For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 4D output tensor with shape</span>
<span class="sd"> `(batch_size, channels, out_height, out_width)` when `layout` is `NCHW`.</span>
<span class="sd"> out_height and out_width are calculated as::</span>
<span class="sd"> out_height = floor((height+2*padding[0]-pool_size[0])/strides[0])+1</span>
<span class="sd"> out_width = floor((width+2*padding[1]-pool_size[1])/strides[1])+1</span>
<span class="sd"> When `ceil_mode` is `True`, ceil will be used instead of floor in this</span>
<span class="sd"> equation.</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">pool_size</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">),</span> <span class="n">strides</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span>
<span class="n">ceil_mode</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span> <span class="s1">&#39;NHWC&#39;</span><span class="p">),</span>\
<span class="s2">&quot;Only NCHW and NHWC layouts are valid for 2D Pooling&quot;</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">pool_size</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">pool_size</span> <span class="o">=</span> <span class="p">(</span><span class="n">pool_size</span><span class="p">,)</span><span class="o">*</span><span class="mi">2</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">pool_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">2</span><span class="p">,</span> <span class="s2">&quot;pool_size must be a number or a list of 2 ints&quot;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">MaxPool2D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="n">pool_size</span><span class="p">,</span> <span class="n">strides</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">ceil_mode</span><span class="p">,</span> <span class="kc">False</span><span class="p">,</span> <span class="s1">&#39;max&#39;</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="MaxPool3D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.MaxPool3D">[docs]</a><span class="k">class</span> <span class="nc">MaxPool3D</span><span class="p">(</span><span class="n">_Pooling</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Max pooling operation for 3D data (spatial or spatio-temporal).</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> pool_size: int or list/tuple of 3 ints,</span>
<span class="sd"> Size of the max pooling windows.</span>
<span class="sd"> strides: int, list/tuple of 3 ints, or None.</span>
<span class="sd"> Factor by which to downscale. E.g. 2 will halve the input size.</span>
<span class="sd"> If `None`, it will default to `pool_size`.</span>
<span class="sd"> padding: int or list/tuple of 3 ints,</span>
<span class="sd"> If padding is non-zero, then the input is implicitly</span>
<span class="sd"> zero-padded on both sides for padding number of points.</span>
<span class="sd"> layout : str, default &#39;NCDHW&#39;</span>
<span class="sd"> Dimension ordering of data and out (&#39;NCDHW&#39; or &#39;NDHWC&#39;).</span>
<span class="sd"> &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39;, &#39;D&#39; stands for batch, channel, height, width and</span>
<span class="sd"> depth dimensions respectively. padding is applied on &#39;D&#39;, &#39;H&#39; and &#39;W&#39;</span>
<span class="sd"> dimension.</span>
<span class="sd"> ceil_mode : bool, default False</span>
<span class="sd"> When `True`, will use ceil instead of floor to compute the output shape.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 5D input tensor with shape</span>
<span class="sd"> `(batch_size, in_channels, depth, height, width)` when `layout` is `NCW`.</span>
<span class="sd"> For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 5D output tensor with shape</span>
<span class="sd"> `(batch_size, channels, out_depth, out_height, out_width)` when `layout` is `NCDHW`.</span>
<span class="sd"> out_depth, out_height and out_width are calculated as::</span>
<span class="sd"> out_depth = floor((depth+2*padding[0]-pool_size[0])/strides[0])+1</span>
<span class="sd"> out_height = floor((height+2*padding[1]-pool_size[1])/strides[1])+1</span>
<span class="sd"> out_width = floor((width+2*padding[2]-pool_size[2])/strides[2])+1</span>
<span class="sd"> When `ceil_mode` is `True`, ceil will be used instead of floor in this</span>
<span class="sd"> equation.</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">pool_size</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">),</span> <span class="n">strides</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span>
<span class="n">ceil_mode</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCDHW&#39;</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCDHW&#39;</span><span class="p">,</span> <span class="s1">&#39;NDHWC&#39;</span><span class="p">),</span>\
<span class="s2">&quot;Only NCDHW and NDHWC layouts are valid for 3D Pooling&quot;</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">pool_size</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">pool_size</span> <span class="o">=</span> <span class="p">(</span><span class="n">pool_size</span><span class="p">,)</span><span class="o">*</span><span class="mi">3</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">pool_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">3</span><span class="p">,</span> <span class="s2">&quot;pool_size must be a number or a list of 3 ints&quot;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">MaxPool3D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="n">pool_size</span><span class="p">,</span> <span class="n">strides</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">ceil_mode</span><span class="p">,</span> <span class="kc">False</span><span class="p">,</span> <span class="s1">&#39;max&#39;</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="AvgPool1D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.AvgPool1D">[docs]</a><span class="k">class</span> <span class="nc">AvgPool1D</span><span class="p">(</span><span class="n">_Pooling</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Average pooling operation for temporal data.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> pool_size: int</span>
<span class="sd"> Size of the average pooling windows.</span>
<span class="sd"> strides: int, or None</span>
<span class="sd"> Factor by which to downscale. E.g. 2 will halve the input size.</span>
<span class="sd"> If `None`, it will default to `pool_size`.</span>
<span class="sd"> padding: int</span>
<span class="sd"> If padding is non-zero, then the input is implicitly</span>
<span class="sd"> zero-padded on both sides for padding number of points.</span>
<span class="sd"> layout : str, default &#39;NCW&#39;</span>
<span class="sd"> Dimension ordering of data and out (&#39;NCW&#39; or &#39;NWC&#39;).</span>
<span class="sd"> &#39;N&#39;, &#39;C&#39;, &#39;W&#39; stands for batch, channel, and width (time) dimensions</span>
<span class="sd"> respectively. padding is applied on &#39;W&#39; dimension.</span>
<span class="sd"> ceil_mode : bool, default False</span>
<span class="sd"> When `True`, will use ceil instead of floor to compute the output shape.</span>
<span class="sd"> count_include_pad : bool, default True</span>
<span class="sd"> When &#39;False&#39;, will exclude padding elements when computing the average value.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 3D input tensor with shape `(batch_size, in_channels, width)`</span>
<span class="sd"> when `layout` is `NCW`. For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 3D output tensor with shape `(batch_size, channels, out_width)`</span>
<span class="sd"> when `layout` is `NCW`. out_width is calculated as::</span>
<span class="sd"> out_width = floor((width+2*padding-pool_size)/strides)+1</span>
<span class="sd"> When `ceil_mode` is `True`, ceil will be used instead of floor in this</span>
<span class="sd"> equation.</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">pool_size</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">strides</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCW&#39;</span><span class="p">,</span>
<span class="n">ceil_mode</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">count_include_pad</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCW&#39;</span><span class="p">,</span> <span class="s1">&#39;NWC&#39;</span><span class="p">),</span>\
<span class="s2">&quot;Only NCW and NWC layouts are valid for 1D Pooling&quot;</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">pool_size</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">pool_size</span> <span class="o">=</span> <span class="p">(</span><span class="n">pool_size</span><span class="p">,)</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">pool_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">1</span><span class="p">,</span> <span class="s2">&quot;pool_size must be a number or a list of 1 ints&quot;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">AvgPool1D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="n">pool_size</span><span class="p">,</span> <span class="n">strides</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">ceil_mode</span><span class="p">,</span> <span class="kc">False</span><span class="p">,</span> <span class="s1">&#39;avg&#39;</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="n">count_include_pad</span><span class="p">,</span>
<span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="AvgPool2D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.AvgPool2D">[docs]</a><span class="k">class</span> <span class="nc">AvgPool2D</span><span class="p">(</span><span class="n">_Pooling</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Average pooling operation for spatial data.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> pool_size: int or list/tuple of 2 ints,</span>
<span class="sd"> Size of the average pooling windows.</span>
<span class="sd"> strides: int, list/tuple of 2 ints, or None.</span>
<span class="sd"> Factor by which to downscale. E.g. 2 will halve the input size.</span>
<span class="sd"> If `None`, it will default to `pool_size`.</span>
<span class="sd"> padding: int or list/tuple of 2 ints,</span>
<span class="sd"> If padding is non-zero, then the input is implicitly</span>
<span class="sd"> zero-padded on both sides for padding number of points.</span>
<span class="sd"> layout : str, default &#39;NCHW&#39;</span>
<span class="sd"> Dimension ordering of data and out (&#39;NCHW&#39; or &#39;NHWC&#39;).</span>
<span class="sd"> &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39; stands for batch, channel, height, and width</span>
<span class="sd"> dimensions respectively. padding is applied on &#39;H&#39; and &#39;W&#39; dimension.</span>
<span class="sd"> ceil_mode : bool, default False</span>
<span class="sd"> When True, will use ceil instead of floor to compute the output shape.</span>
<span class="sd"> count_include_pad : bool, default True</span>
<span class="sd"> When &#39;False&#39;, will exclude padding elements when computing the average value.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 4D input tensor with shape</span>
<span class="sd"> `(batch_size, in_channels, height, width)` when `layout` is `NCHW`.</span>
<span class="sd"> For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 4D output tensor with shape</span>
<span class="sd"> `(batch_size, channels, out_height, out_width)` when `layout` is `NCHW`.</span>
<span class="sd"> out_height and out_width are calculated as::</span>
<span class="sd"> out_height = floor((height+2*padding[0]-pool_size[0])/strides[0])+1</span>
<span class="sd"> out_width = floor((width+2*padding[1]-pool_size[1])/strides[1])+1</span>
<span class="sd"> When `ceil_mode` is `True`, ceil will be used instead of floor in this</span>
<span class="sd"> equation.</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">pool_size</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">),</span> <span class="n">strides</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span>
<span class="n">ceil_mode</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span> <span class="n">count_include_pad</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span> <span class="s1">&#39;NHWC&#39;</span><span class="p">),</span>\
<span class="s2">&quot;Only NCHW and NHWC layouts are valid for 2D Pooling&quot;</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">pool_size</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">pool_size</span> <span class="o">=</span> <span class="p">(</span><span class="n">pool_size</span><span class="p">,)</span><span class="o">*</span><span class="mi">2</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">pool_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">2</span><span class="p">,</span> <span class="s2">&quot;pool_size must be a number or a list of 2 ints&quot;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">AvgPool2D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="n">pool_size</span><span class="p">,</span> <span class="n">strides</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">ceil_mode</span><span class="p">,</span> <span class="kc">False</span><span class="p">,</span> <span class="s1">&#39;avg&#39;</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="n">count_include_pad</span><span class="p">,</span>
<span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="AvgPool3D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.AvgPool3D">[docs]</a><span class="k">class</span> <span class="nc">AvgPool3D</span><span class="p">(</span><span class="n">_Pooling</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Average pooling operation for 3D data (spatial or spatio-temporal).</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> pool_size: int or list/tuple of 3 ints,</span>
<span class="sd"> Size of the average pooling windows.</span>
<span class="sd"> strides: int, list/tuple of 3 ints, or None.</span>
<span class="sd"> Factor by which to downscale. E.g. 2 will halve the input size.</span>
<span class="sd"> If `None`, it will default to `pool_size`.</span>
<span class="sd"> padding: int or list/tuple of 3 ints,</span>
<span class="sd"> If padding is non-zero, then the input is implicitly</span>
<span class="sd"> zero-padded on both sides for padding number of points.</span>
<span class="sd"> layout : str, default &#39;NCDHW&#39;</span>
<span class="sd"> Dimension ordering of data and out (&#39;NCDHW&#39; or &#39;NDHWC&#39;).</span>
<span class="sd"> &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39;, &#39;D&#39; stands for batch, channel, height, width and</span>
<span class="sd"> depth dimensions respectively. padding is applied on &#39;D&#39;, &#39;H&#39; and &#39;W&#39;</span>
<span class="sd"> dimension.</span>
<span class="sd"> ceil_mode : bool, default False</span>
<span class="sd"> When True, will use ceil instead of floor to compute the output shape.</span>
<span class="sd"> count_include_pad : bool, default True</span>
<span class="sd"> When &#39;False&#39;, will exclude padding elements when computing the average value.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 5D input tensor with shape</span>
<span class="sd"> `(batch_size, in_channels, depth, height, width)` when `layout` is `NCDHW`.</span>
<span class="sd"> For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 5D output tensor with shape</span>
<span class="sd"> `(batch_size, channels, out_depth, out_height, out_width)` when `layout` is `NCDHW`.</span>
<span class="sd"> out_depth, out_height and out_width are calculated as::</span>
<span class="sd"> out_depth = floor((depth+2*padding[0]-pool_size[0])/strides[0])+1</span>
<span class="sd"> out_height = floor((height+2*padding[1]-pool_size[1])/strides[1])+1</span>
<span class="sd"> out_width = floor((width+2*padding[2]-pool_size[2])/strides[2])+1</span>
<span class="sd"> When `ceil_mode` is `True,` ceil will be used instead of floor in this</span>
<span class="sd"> equation.</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">pool_size</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">),</span> <span class="n">strides</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span>
<span class="n">ceil_mode</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCDHW&#39;</span><span class="p">,</span> <span class="n">count_include_pad</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCDHW&#39;</span><span class="p">,</span> <span class="s1">&#39;NDHWC&#39;</span><span class="p">),</span>\
<span class="s2">&quot;Only NCDHW and NDHWC layouts are valid for 3D Pooling&quot;</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">pool_size</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">pool_size</span> <span class="o">=</span> <span class="p">(</span><span class="n">pool_size</span><span class="p">,)</span><span class="o">*</span><span class="mi">3</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="n">pool_size</span><span class="p">)</span> <span class="o">==</span> <span class="mi">3</span><span class="p">,</span> <span class="s2">&quot;pool_size must be a number or a list of 3 ints&quot;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">AvgPool3D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="n">pool_size</span><span class="p">,</span> <span class="n">strides</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">ceil_mode</span><span class="p">,</span> <span class="kc">False</span><span class="p">,</span> <span class="s1">&#39;avg&#39;</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="n">count_include_pad</span><span class="p">,</span>
<span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="GlobalMaxPool1D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.GlobalMaxPool1D">[docs]</a><span class="k">class</span> <span class="nc">GlobalMaxPool1D</span><span class="p">(</span><span class="n">_Pooling</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Gloabl max pooling operation for one dimensional (temporal) data.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> layout : str, default &#39;NCW&#39;</span>
<span class="sd"> Dimension ordering of data and out (&#39;NCW&#39; or &#39;NWC&#39;).</span>
<span class="sd"> &#39;N&#39;, &#39;C&#39;, &#39;W&#39; stands for batch, channel, and width (time) dimensions</span>
<span class="sd"> respectively. Pooling is applied on the W dimension.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 3D input tensor with shape `(batch_size, in_channels, width)`</span>
<span class="sd"> when `layout` is `NCW`. For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 3D output tensor with shape `(batch_size, channels, 1)`</span>
<span class="sd"> when `layout` is `NCW`.</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCW&#39;</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCW&#39;</span><span class="p">,</span> <span class="s1">&#39;NWC&#39;</span><span class="p">),</span>\
<span class="s2">&quot;Only NCW and NWC layouts are valid for 1D Pooling&quot;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">GlobalMaxPool1D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="p">(</span><span class="mi">1</span><span class="p">,),</span> <span class="kc">None</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="kc">True</span><span class="p">,</span> <span class="kc">True</span><span class="p">,</span> <span class="s1">&#39;max&#39;</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="GlobalMaxPool2D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.GlobalMaxPool2D">[docs]</a><span class="k">class</span> <span class="nc">GlobalMaxPool2D</span><span class="p">(</span><span class="n">_Pooling</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Global max pooling operation for two dimensional (spatial) data.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> layout : str, default &#39;NCHW&#39;</span>
<span class="sd"> Dimension ordering of data and out (&#39;NCHW&#39; or &#39;NHWC&#39;).</span>
<span class="sd"> &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39; stands for batch, channel, height, and width</span>
<span class="sd"> dimensions respectively. padding is applied on &#39;H&#39; and &#39;W&#39; dimension.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 4D input tensor with shape</span>
<span class="sd"> `(batch_size, in_channels, height, width)` when `layout` is `NCHW`.</span>
<span class="sd"> For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 4D output tensor with shape</span>
<span class="sd"> `(batch_size, channels, 1, 1)` when `layout` is `NCHW`.</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span> <span class="s1">&#39;NHWC&#39;</span><span class="p">),</span>\
<span class="s2">&quot;Only NCHW and NHWC layouts are valid for 2D Pooling&quot;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">GlobalMaxPool2D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="kc">None</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="kc">True</span><span class="p">,</span> <span class="kc">True</span><span class="p">,</span> <span class="s1">&#39;max&#39;</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="GlobalMaxPool3D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.GlobalMaxPool3D">[docs]</a><span class="k">class</span> <span class="nc">GlobalMaxPool3D</span><span class="p">(</span><span class="n">_Pooling</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Global max pooling operation for 3D data (spatial or spatio-temporal).</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> layout : str, default &#39;NCDHW&#39;</span>
<span class="sd"> Dimension ordering of data and out (&#39;NCDHW&#39; or &#39;NDHWC&#39;).</span>
<span class="sd"> &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39;, &#39;D&#39; stands for batch, channel, height, width and</span>
<span class="sd"> depth dimensions respectively. padding is applied on &#39;D&#39;, &#39;H&#39; and &#39;W&#39;</span>
<span class="sd"> dimension.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 5D input tensor with shape</span>
<span class="sd"> `(batch_size, in_channels, depth, height, width)` when `layout` is `NCW`.</span>
<span class="sd"> For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 5D output tensor with shape</span>
<span class="sd"> `(batch_size, channels, 1, 1, 1)` when `layout` is `NCDHW`.</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCDHW&#39;</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCDHW&#39;</span><span class="p">,</span> <span class="s1">&#39;NDHWC&#39;</span><span class="p">),</span>\
<span class="s2">&quot;Only NCDHW and NDHWC layouts are valid for 3D Pooling&quot;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">GlobalMaxPool3D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="kc">None</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="kc">True</span><span class="p">,</span> <span class="kc">True</span><span class="p">,</span> <span class="s1">&#39;max&#39;</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="GlobalAvgPool1D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.GlobalAvgPool1D">[docs]</a><span class="k">class</span> <span class="nc">GlobalAvgPool1D</span><span class="p">(</span><span class="n">_Pooling</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Global average pooling operation for temporal data.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> layout : str, default &#39;NCW&#39;</span>
<span class="sd"> Dimension ordering of data and out (&#39;NCW&#39; or &#39;NWC&#39;).</span>
<span class="sd"> &#39;N&#39;, &#39;C&#39;, &#39;W&#39; stands for batch, channel, and width (time) dimensions</span>
<span class="sd"> respectively. padding is applied on &#39;W&#39; dimension.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 3D input tensor with shape `(batch_size, in_channels, width)`</span>
<span class="sd"> when `layout` is `NCW`. For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 3D output tensor with shape `(batch_size, channels, 1)`.</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCW&#39;</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCW&#39;</span><span class="p">,</span> <span class="s1">&#39;NWC&#39;</span><span class="p">),</span>\
<span class="s2">&quot;Only NCW and NWC layouts are valid for 1D Pooling&quot;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">GlobalAvgPool1D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="p">(</span><span class="mi">1</span><span class="p">,),</span> <span class="kc">None</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="kc">True</span><span class="p">,</span> <span class="kc">True</span><span class="p">,</span> <span class="s1">&#39;avg&#39;</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="GlobalAvgPool2D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.GlobalAvgPool2D">[docs]</a><span class="k">class</span> <span class="nc">GlobalAvgPool2D</span><span class="p">(</span><span class="n">_Pooling</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Global average pooling operation for spatial data.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> layout : str, default &#39;NCHW&#39;</span>
<span class="sd"> Dimension ordering of data and out (&#39;NCHW&#39; or &#39;NHWC&#39;).</span>
<span class="sd"> &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39; stands for batch, channel, height, and width</span>
<span class="sd"> dimensions respectively.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 4D input tensor with shape</span>
<span class="sd"> `(batch_size, in_channels, height, width)` when `layout` is `NCHW`.</span>
<span class="sd"> For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 4D output tensor with shape</span>
<span class="sd"> `(batch_size, channels, 1, 1)` when `layout` is `NCHW`.</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span> <span class="s1">&#39;NHWC&#39;</span><span class="p">),</span>\
<span class="s2">&quot;Only NCHW and NHWC layouts are valid for 2D Pooling&quot;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">GlobalAvgPool2D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="kc">None</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="kc">True</span><span class="p">,</span> <span class="kc">True</span><span class="p">,</span> <span class="s1">&#39;avg&#39;</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="GlobalAvgPool3D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.GlobalAvgPool3D">[docs]</a><span class="k">class</span> <span class="nc">GlobalAvgPool3D</span><span class="p">(</span><span class="n">_Pooling</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Global average pooling operation for 3D data (spatial or spatio-temporal).</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> layout : str, default &#39;NCDHW&#39;</span>
<span class="sd"> Dimension ordering of data and out (&#39;NCDHW&#39; or &#39;NDHWC&#39;).</span>
<span class="sd"> &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39;, &#39;D&#39; stands for batch, channel, height, width and</span>
<span class="sd"> depth dimensions respectively. padding is applied on &#39;D&#39;, &#39;H&#39; and &#39;W&#39;</span>
<span class="sd"> dimension.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 5D input tensor with shape</span>
<span class="sd"> `(batch_size, in_channels, depth, height, width)` when `layout` is `NCDHW`.</span>
<span class="sd"> For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 5D output tensor with shape</span>
<span class="sd"> `(batch_size, channels, 1, 1, 1)` when `layout` is `NCDHW`.</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCDHW&#39;</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCDHW&#39;</span><span class="p">,</span> <span class="s1">&#39;NDHWC&#39;</span><span class="p">),</span>\
<span class="s2">&quot;Only NCDHW and NDHWC layouts are valid for 3D Pooling&quot;</span>
<span class="nb">super</span><span class="p">(</span><span class="n">GlobalAvgPool3D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span>
<span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="kc">None</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="kc">True</span><span class="p">,</span> <span class="kc">True</span><span class="p">,</span> <span class="s1">&#39;avg&#39;</span><span class="p">,</span> <span class="n">layout</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span></div>
<div class="viewcode-block" id="ReflectionPad2D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.ReflectionPad2D">[docs]</a><span class="nd">@use_np</span>
<span class="k">class</span> <span class="nc">ReflectionPad2D</span><span class="p">(</span><span class="n">HybridBlock</span><span class="p">):</span>
<span class="w"> </span><span class="sa">r</span><span class="sd">&quot;&quot;&quot;Pads the input tensor using the reflection of the input boundary.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> padding: int</span>
<span class="sd"> An integer padding size</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: input tensor with the shape :math:`(N, C, H_{in}, W_{in})`.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: output tensor with the shape :math:`(N, C, H_{out}, W_{out})`, where</span>
<span class="sd"> .. math::</span>
<span class="sd"> H_{out} = H_{in} + 2 \cdot padding</span>
<span class="sd"> W_{out} = W_{in} + 2 \cdot padding</span>
<span class="sd"> Examples</span>
<span class="sd"> --------</span>
<span class="sd"> &gt;&gt;&gt; m = nn.ReflectionPad2D(3)</span>
<span class="sd"> &gt;&gt;&gt; input = mx.np.random.normal(size=(16, 3, 224, 224))</span>
<span class="sd"> &gt;&gt;&gt; output = m(input)</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="nb">super</span><span class="p">(</span><span class="n">ReflectionPad2D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span><span class="o">**</span><span class="n">kwargs</span><span class="p">)</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">padding</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">padding</span> <span class="o">=</span> <span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">padding</span><span class="p">,</span> <span class="n">padding</span><span class="p">)</span>
<span class="k">assert</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">padding</span><span class="p">)</span> <span class="o">==</span> <span class="mi">8</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_padding</span> <span class="o">=</span> <span class="n">padding</span>
<div class="viewcode-block" id="ReflectionPad2D.forward"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.ReflectionPad2D.forward">[docs]</a> <span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;</span>
<span class="sd"> Use pad operator in numpy extension module,</span>
<span class="sd"> which has backward support for reflect mode</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">return</span> <span class="n">npx</span><span class="o">.</span><span class="n">pad</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">mode</span><span class="o">=</span><span class="s1">&#39;reflect&#39;</span><span class="p">,</span> <span class="n">pad_width</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">_padding</span><span class="p">)</span></div></div>
<div class="viewcode-block" id="DeformableConvolution"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.DeformableConvolution">[docs]</a><span class="nd">@use_np</span>
<span class="k">class</span> <span class="nc">DeformableConvolution</span><span class="p">(</span><span class="n">HybridBlock</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;2-D Deformable Convolution v_1 (Dai, 2017).</span>
<span class="sd"> Normal Convolution uses sampling points in a regular grid, while the sampling</span>
<span class="sd"> points of Deformablem Convolution can be offset. The offset is learned with a</span>
<span class="sd"> separate convolution layer during the training. Both the convolution layer for</span>
<span class="sd"> generating the output features and the offsets are included in this gluon layer.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> channels : int,</span>
<span class="sd"> The dimensionality of the output space</span>
<span class="sd"> i.e. the number of output channels in the convolution.</span>
<span class="sd"> kernel_size : int or tuple/list of 2 ints, (Default value = (1,1))</span>
<span class="sd"> Specifies the dimensions of the convolution window.</span>
<span class="sd"> strides : int or tuple/list of 2 ints, (Default value = (1,1))</span>
<span class="sd"> Specifies the strides of the convolution.</span>
<span class="sd"> padding : int or tuple/list of 2 ints, (Default value = (0,0))</span>
<span class="sd"> If padding is non-zero, then the input is implicitly zero-padded</span>
<span class="sd"> on both sides for padding number of points.</span>
<span class="sd"> dilation : int or tuple/list of 2 ints, (Default value = (1,1))</span>
<span class="sd"> Specifies the dilation rate to use for dilated convolution.</span>
<span class="sd"> groups : int, (Default value = 1)</span>
<span class="sd"> Controls the connections between inputs and outputs.</span>
<span class="sd"> At groups=1, all inputs are convolved to all outputs.</span>
<span class="sd"> At groups=2, the operation becomes equivalent to having two convolution</span>
<span class="sd"> layers side by side, each seeing half the input channels, and producing</span>
<span class="sd"> half the output channels, and both subsequently concatenated.</span>
<span class="sd"> num_deformable_group : int, (Default value = 1)</span>
<span class="sd"> Number of deformable group partitions.</span>
<span class="sd"> layout : str, (Default value = NCHW)</span>
<span class="sd"> Dimension ordering of data and weight. Can be &#39;NCW&#39;, &#39;NWC&#39;, &#39;NCHW&#39;,</span>
<span class="sd"> &#39;NHWC&#39;, &#39;NCDHW&#39;, &#39;NDHWC&#39;, etc. &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39;, &#39;D&#39; stands for</span>
<span class="sd"> batch, channel, height, width and depth dimensions respectively.</span>
<span class="sd"> Convolution is performed over &#39;D&#39;, &#39;H&#39;, and &#39;W&#39; dimensions.</span>
<span class="sd"> use_bias : bool, (Default value = True)</span>
<span class="sd"> Whether the layer for generating the output features uses a bias vector.</span>
<span class="sd"> in_channels : int, (Default value = 0)</span>
<span class="sd"> The number of input channels to this layer. If not specified,</span>
<span class="sd"> initialization will be deferred to the first time `forward` is called</span>
<span class="sd"> and input channels will be inferred from the shape of input data.</span>
<span class="sd"> activation : str, (Default value = None)</span>
<span class="sd"> Activation function to use. See :func:`~mxnet.npx.activation`.</span>
<span class="sd"> If you don&#39;t specify anything, no activation is applied</span>
<span class="sd"> (ie. &quot;linear&quot; activation: `a(x) = x`).</span>
<span class="sd"> weight_initializer : str or `Initializer`, (Default value = None)</span>
<span class="sd"> Initializer for the `weight` weights matrix for the convolution layer</span>
<span class="sd"> for generating the output features.</span>
<span class="sd"> bias_initializer : str or `Initializer`, (Default value = zeros)</span>
<span class="sd"> Initializer for the bias vector for the convolution layer</span>
<span class="sd"> for generating the output features.</span>
<span class="sd"> offset_weight_initializer : str or `Initializer`, (Default value = zeros)</span>
<span class="sd"> Initializer for the `weight` weights matrix for the convolution layer</span>
<span class="sd"> for generating the offset.</span>
<span class="sd"> offset_bias_initializer : str or `Initializer`, (Default value = zeros),</span>
<span class="sd"> Initializer for the bias vector for the convolution layer</span>
<span class="sd"> for generating the offset.</span>
<span class="sd"> offset_use_bias: bool, (Default value = True)</span>
<span class="sd"> Whether the layer for generating the offset uses a bias vector.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 4D input tensor with shape</span>
<span class="sd"> `(batch_size, in_channels, height, width)` when `layout` is `NCHW`.</span>
<span class="sd"> For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 4D output tensor with shape</span>
<span class="sd"> `(batch_size, channels, out_height, out_width)` when `layout` is `NCHW`.</span>
<span class="sd"> out_height and out_width are calculated as::</span>
<span class="sd"> out_height = floor((height+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/stride[0])+1</span>
<span class="sd"> out_width = floor((width+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/stride[1])+1</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">strides</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">padding</span><span class="o">=</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span> <span class="n">dilation</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">groups</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span>
<span class="n">num_deformable_group</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span> <span class="n">use_bias</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">in_channels</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span>
<span class="n">weight_initializer</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">bias_initializer</span><span class="o">=</span><span class="s1">&#39;zeros&#39;</span><span class="p">,</span>
<span class="n">offset_weight_initializer</span><span class="o">=</span><span class="s1">&#39;zeros&#39;</span><span class="p">,</span> <span class="n">offset_bias_initializer</span><span class="o">=</span><span class="s1">&#39;zeros&#39;</span><span class="p">,</span> <span class="n">offset_use_bias</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span>
<span class="n">op_name</span><span class="o">=</span><span class="s1">&#39;DeformableConvolution&#39;</span><span class="p">,</span> <span class="n">adj</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
<span class="nb">super</span><span class="p">(</span><span class="n">DeformableConvolution</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_channels</span> <span class="o">=</span> <span class="n">channels</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_in_channels</span> <span class="o">=</span> <span class="n">in_channels</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span> <span class="s1">&#39;NHWC&#39;</span><span class="p">),</span> <span class="s2">&quot;Only supports &#39;NCHW&#39; and &#39;NHWC&#39; layout for now&quot;</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">kernel_size</span> <span class="o">=</span> <span class="p">(</span><span class="n">kernel_size</span><span class="p">,)</span> <span class="o">*</span> <span class="mi">2</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">strides</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">strides</span> <span class="o">=</span> <span class="p">(</span><span class="n">strides</span><span class="p">,)</span> <span class="o">*</span> <span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">padding</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">padding</span> <span class="o">=</span> <span class="p">(</span><span class="n">padding</span><span class="p">,)</span> <span class="o">*</span> <span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">dilation</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">dilation</span> <span class="o">=</span> <span class="p">(</span><span class="n">dilation</span><span class="p">,)</span> <span class="o">*</span> <span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_op_name</span> <span class="o">=</span> <span class="n">op_name</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span> <span class="o">=</span> <span class="n">kernel_size</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_layout</span> <span class="o">=</span> <span class="n">layout</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_groups</span> <span class="o">=</span> <span class="n">groups</span>
<span class="n">offset_channels</span> <span class="o">=</span> <span class="mi">2</span> <span class="o">*</span> <span class="n">kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*</span> <span class="n">kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="o">*</span> <span class="n">num_deformable_group</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_offset_channels</span> <span class="o">=</span> <span class="n">offset_channels</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_offset</span> <span class="o">=</span> <span class="p">{</span>
<span class="s1">&#39;kernel&#39;</span><span class="p">:</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="s1">&#39;stride&#39;</span><span class="p">:</span> <span class="n">strides</span><span class="p">,</span> <span class="s1">&#39;dilate&#39;</span><span class="p">:</span> <span class="n">dilation</span><span class="p">,</span>
<span class="s1">&#39;pad&#39;</span><span class="p">:</span> <span class="n">padding</span><span class="p">,</span> <span class="s1">&#39;num_filter&#39;</span><span class="p">:</span> <span class="n">offset_channels</span><span class="p">,</span> <span class="s1">&#39;num_group&#39;</span><span class="p">:</span> <span class="n">groups</span><span class="p">,</span>
<span class="s1">&#39;no_bias&#39;</span><span class="p">:</span> <span class="ow">not</span> <span class="n">offset_use_bias</span><span class="p">,</span> <span class="s1">&#39;layout&#39;</span><span class="p">:</span> <span class="n">layout</span><span class="p">}</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_deformable_conv</span> <span class="o">=</span> <span class="p">{</span>
<span class="s1">&#39;kernel&#39;</span><span class="p">:</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="s1">&#39;stride&#39;</span><span class="p">:</span> <span class="n">strides</span><span class="p">,</span> <span class="s1">&#39;dilate&#39;</span><span class="p">:</span> <span class="n">dilation</span><span class="p">,</span>
<span class="s1">&#39;pad&#39;</span><span class="p">:</span> <span class="n">padding</span><span class="p">,</span> <span class="s1">&#39;num_filter&#39;</span><span class="p">:</span> <span class="n">channels</span><span class="p">,</span> <span class="s1">&#39;num_group&#39;</span><span class="p">:</span> <span class="n">groups</span><span class="p">,</span>
<span class="s1">&#39;num_deformable_group&#39;</span><span class="p">:</span> <span class="n">num_deformable_group</span><span class="p">,</span>
<span class="s1">&#39;no_bias&#39;</span><span class="p">:</span> <span class="ow">not</span> <span class="n">use_bias</span><span class="p">,</span> <span class="s1">&#39;layout&#39;</span><span class="p">:</span> <span class="n">layout</span><span class="p">}</span>
<span class="k">if</span> <span class="n">adj</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_offset</span><span class="p">[</span><span class="s1">&#39;adj&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="n">adj</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_deformable_conv</span><span class="p">[</span><span class="s1">&#39;adj&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="n">adj</span>
<span class="bp">self</span><span class="o">.</span><span class="n">offset_weight</span> <span class="o">=</span> <span class="n">Parameter</span><span class="p">(</span><span class="s1">&#39;offset_weight&#39;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">pre_infer_offset_weight</span><span class="p">(),</span>
<span class="n">init</span><span class="o">=</span><span class="n">offset_weight_initializer</span><span class="p">,</span>
<span class="n">allow_deferred_init</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="k">if</span> <span class="n">offset_use_bias</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">offset_bias</span> <span class="o">=</span> <span class="n">Parameter</span><span class="p">(</span><span class="s1">&#39;offset_bias&#39;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="n">offset_channels</span><span class="p">,),</span>
<span class="n">init</span><span class="o">=</span><span class="n">offset_bias_initializer</span><span class="p">,</span>
<span class="n">allow_deferred_init</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">offset_bias</span> <span class="o">=</span> <span class="kc">None</span>
<span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_weight</span> <span class="o">=</span> <span class="n">Parameter</span><span class="p">(</span><span class="s1">&#39;deformable_conv_weight&#39;</span><span class="p">,</span>
<span class="n">shape</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">pre_infer_weight</span><span class="p">(),</span>
<span class="n">init</span><span class="o">=</span><span class="n">weight_initializer</span><span class="p">,</span>
<span class="n">allow_deferred_init</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="k">if</span> <span class="n">use_bias</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_bias</span> <span class="o">=</span> <span class="n">Parameter</span><span class="p">(</span><span class="s1">&#39;deformable_conv_bias&#39;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="n">channels</span><span class="p">,),</span>
<span class="n">init</span><span class="o">=</span><span class="n">bias_initializer</span><span class="p">,</span>
<span class="n">allow_deferred_init</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_bias</span> <span class="o">=</span> <span class="kc">None</span>
<span class="k">if</span> <span class="n">activation</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">act</span> <span class="o">=</span> <span class="n">Activation</span><span class="p">(</span><span class="n">activation</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">act</span> <span class="o">=</span> <span class="kc">None</span>
<div class="viewcode-block" id="DeformableConvolution.forward"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.DeformableConvolution.forward">[docs]</a> <span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="n">device</span> <span class="o">=</span> <span class="n">x</span><span class="o">.</span><span class="n">device</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">offset_bias</span> <span class="ow">is</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">offset</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">convolution</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">offset_weight</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span> <span class="n">cudnn_off</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_offset</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">offset</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">convolution</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">offset_weight</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span> <span class="bp">self</span><span class="o">.</span><span class="n">offset_bias</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span>
<span class="n">cudnn_off</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_offset</span><span class="p">)</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_bias</span> <span class="ow">is</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">act</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">deformable_convolution</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">x</span><span class="p">,</span> <span class="n">offset</span><span class="o">=</span><span class="n">offset</span><span class="p">,</span>
<span class="n">weight</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_weight</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span>
<span class="n">name</span><span class="o">=</span><span class="s1">&#39;fwd&#39;</span><span class="p">,</span> <span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_deformable_conv</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">act</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">deformable_convolution</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">x</span><span class="p">,</span> <span class="n">offset</span><span class="o">=</span><span class="n">offset</span><span class="p">,</span>
<span class="n">weight</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_weight</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span>
<span class="n">bias</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_bias</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span> <span class="n">name</span><span class="o">=</span><span class="s1">&#39;fwd&#39;</span><span class="p">,</span>
<span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_deformable_conv</span><span class="p">)</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">act</span><span class="p">:</span>
<span class="n">act</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">act</span><span class="p">(</span><span class="n">act</span><span class="p">)</span>
<span class="k">return</span> <span class="n">act</span></div>
<div class="viewcode-block" id="DeformableConvolution.pre_infer_offset_weight"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.DeformableConvolution.pre_infer_offset_weight">[docs]</a> <span class="k">def</span> <span class="nf">pre_infer_offset_weight</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;</span>
<span class="sd"> Pre-infer the shape of offsite weight parameter based on kernel size,</span>
<span class="sd"> group size and offset channels</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="n">wshape</span> <span class="o">=</span> <span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">)</span> <span class="o">+</span> <span class="mi">2</span><span class="p">)</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;N&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_offset_channels</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;H&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;W&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="k">return</span> <span class="nb">tuple</span><span class="p">(</span><span class="n">wshape</span><span class="p">)</span></div>
<div class="viewcode-block" id="DeformableConvolution.pre_infer_weight"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.DeformableConvolution.pre_infer_weight">[docs]</a> <span class="k">def</span> <span class="nf">pre_infer_weight</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;</span>
<span class="sd"> Pre-infer the shape of weight parameter based on kernel size, group size and channels</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="n">wshape</span> <span class="o">=</span> <span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">)</span> <span class="o">+</span> <span class="mi">2</span><span class="p">)</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;N&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_channels</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;H&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;W&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="k">return</span> <span class="nb">tuple</span><span class="p">(</span><span class="n">wshape</span><span class="p">)</span></div>
<div class="viewcode-block" id="DeformableConvolution.infer_shape"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.DeformableConvolution.infer_shape">[docs]</a> <span class="k">def</span> <span class="nf">infer_shape</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="n">dshape1</span> <span class="o">=</span> <span class="n">x</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;C&#39;</span><span class="p">)]</span>
<span class="n">wshape</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_weight</span><span class="o">.</span><span class="n">shape</span>
<span class="n">wshape_offset</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">offset_weight</span><span class="o">.</span><span class="n">shape</span>
<span class="n">wshape_list</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="n">wshape</span><span class="p">)</span>
<span class="n">wshape_offset_list</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="n">wshape_offset</span><span class="p">)</span>
<span class="n">wshape_list</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;C&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="n">dshape1</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="n">wshape_offset_list</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;C&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="n">dshape1</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_weight</span><span class="o">.</span><span class="n">shape</span> <span class="o">=</span> <span class="nb">tuple</span><span class="p">(</span><span class="n">wshape_list</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">offset_weight</span><span class="o">.</span><span class="n">shape</span> <span class="o">=</span> <span class="nb">tuple</span><span class="p">(</span><span class="n">wshape_offset_list</span><span class="p">)</span></div>
<span class="k">def</span> <span class="nf">_alias</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="k">return</span> <span class="s1">&#39;deformable_conv&#39;</span>
<span class="k">def</span> <span class="fm">__repr__</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="n">s</span> <span class="o">=</span> <span class="s1">&#39;</span><span class="si">{name}</span><span class="s1">(</span><span class="si">{mapping}</span><span class="s1">, kernel_size=</span><span class="si">{kernel}</span><span class="s1">, stride=</span><span class="si">{stride}</span><span class="s1">&#39;</span>
<span class="n">len_kernel_size</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_deformable_conv</span><span class="p">[</span><span class="s1">&#39;kernel&#39;</span><span class="p">])</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_deformable_conv</span><span class="p">[</span><span class="s1">&#39;pad&#39;</span><span class="p">]</span> <span class="o">!=</span> <span class="p">(</span><span class="mi">0</span><span class="p">,)</span> <span class="o">*</span> <span class="n">len_kernel_size</span><span class="p">:</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;, padding=</span><span class="si">{pad}</span><span class="s1">&#39;</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_deformable_conv</span><span class="p">[</span><span class="s1">&#39;dilate&#39;</span><span class="p">]</span> <span class="o">!=</span> <span class="p">(</span><span class="mi">1</span><span class="p">,)</span> <span class="o">*</span> <span class="n">len_kernel_size</span><span class="p">:</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;, dilation=</span><span class="si">{dilate}</span><span class="s1">&#39;</span>
<span class="k">if</span> <span class="nb">hasattr</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="s1">&#39;out_pad&#39;</span><span class="p">)</span> <span class="ow">and</span> <span class="bp">self</span><span class="o">.</span><span class="n">out_pad</span> <span class="o">!=</span> <span class="p">(</span><span class="mi">0</span><span class="p">,)</span> <span class="o">*</span> <span class="n">len_kernel_size</span><span class="p">:</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;, output_padding=</span><span class="si">{out_pad}</span><span class="s1">&#39;</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">out_pad</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">out_pad</span><span class="p">)</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_deformable_conv</span><span class="p">[</span><span class="s1">&#39;num_group&#39;</span><span class="p">]</span> <span class="o">!=</span> <span class="mi">1</span><span class="p">:</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;, groups=</span><span class="si">{num_group}</span><span class="s1">&#39;</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_bias</span> <span class="ow">is</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;, bias=False&#39;</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">act</span><span class="p">:</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;, </span><span class="si">{}</span><span class="s1">&#39;</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">act</span><span class="p">)</span>
<span class="n">s</span> <span class="o">+=</span> <span class="s1">&#39;)&#39;</span>
<span class="n">shape</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_weight</span><span class="o">.</span><span class="n">shape</span>
<span class="k">return</span> <span class="n">s</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="vm">__class__</span><span class="o">.</span><span class="vm">__name__</span><span class="p">,</span>
<span class="n">mapping</span><span class="o">=</span><span class="s1">&#39;</span><span class="si">{0}</span><span class="s1"> -&gt; </span><span class="si">{1}</span><span class="s1">&#39;</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="k">if</span> <span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="k">else</span> <span class="kc">None</span><span class="p">,</span> <span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">]),</span>
<span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_deformable_conv</span><span class="p">)</span></div>
<div class="viewcode-block" id="ModulatedDeformableConvolution"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.ModulatedDeformableConvolution">[docs]</a><span class="nd">@use_np</span>
<span class="k">class</span> <span class="nc">ModulatedDeformableConvolution</span><span class="p">(</span><span class="n">HybridBlock</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;2-D Deformable Convolution v2 (Dai, 2018).</span>
<span class="sd"> The modulated deformable convolution operation is described in https://arxiv.org/abs/1811.11168</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> channels : int,</span>
<span class="sd"> The dimensionality of the output space</span>
<span class="sd"> i.e. the number of output channels in the convolution.</span>
<span class="sd"> kernel_size : int or tuple/list of 2 ints, (Default value = (1,1))</span>
<span class="sd"> Specifies the dimensions of the convolution window.</span>
<span class="sd"> strides : int or tuple/list of 2 ints, (Default value = (1,1))</span>
<span class="sd"> Specifies the strides of the convolution.</span>
<span class="sd"> padding : int or tuple/list of 2 ints, (Default value = (0,0))</span>
<span class="sd"> If padding is non-zero, then the input is implicitly zero-padded</span>
<span class="sd"> on both sides for padding number of points.</span>
<span class="sd"> dilation : int or tuple/list of 2 ints, (Default value = (1,1))</span>
<span class="sd"> Specifies the dilation rate to use for dilated convolution.</span>
<span class="sd"> groups : int, (Default value = 1)</span>
<span class="sd"> Controls the connections between inputs and outputs.</span>
<span class="sd"> At groups=1, all inputs are convolved to all outputs.</span>
<span class="sd"> At groups=2, the operation becomes equivalent to having two convolution</span>
<span class="sd"> layers side by side, each seeing half the input channels, and producing</span>
<span class="sd"> half the output channels, and both subsequently concatenated.</span>
<span class="sd"> num_deformable_group : int, (Default value = 1)</span>
<span class="sd"> Number of deformable group partitions.</span>
<span class="sd"> layout : str, (Default value = NCHW)</span>
<span class="sd"> Dimension ordering of data and weight. Can be &#39;NCW&#39;, &#39;NWC&#39;, &#39;NCHW&#39;,</span>
<span class="sd"> &#39;NHWC&#39;, &#39;NCDHW&#39;, &#39;NDHWC&#39;, etc. &#39;N&#39;, &#39;C&#39;, &#39;H&#39;, &#39;W&#39;, &#39;D&#39; stands for</span>
<span class="sd"> batch, channel, height, width and depth dimensions respectively.</span>
<span class="sd"> Convolution is performed over &#39;D&#39;, &#39;H&#39;, and &#39;W&#39; dimensions.</span>
<span class="sd"> use_bias : bool, (Default value = True)</span>
<span class="sd"> Whether the layer for generating the output features uses a bias vector.</span>
<span class="sd"> in_channels : int, (Default value = 0)</span>
<span class="sd"> The number of input channels to this layer. If not specified,</span>
<span class="sd"> initialization will be deferred to the first time `forward` is called</span>
<span class="sd"> and input channels will be inferred from the shape of input data.</span>
<span class="sd"> activation : str, (Default value = None)</span>
<span class="sd"> Activation function to use. See :func:`~mxnet.ndarray.Activation`.</span>
<span class="sd"> If you don&#39;t specify anything, no activation is applied</span>
<span class="sd"> (ie. &quot;linear&quot; activation: `a(x) = x`).</span>
<span class="sd"> weight_initializer : str or `Initializer`, (Default value = None)</span>
<span class="sd"> Initializer for the `weight` weights matrix for the convolution layer</span>
<span class="sd"> for generating the output features.</span>
<span class="sd"> bias_initializer : str or `Initializer`, (Default value = zeros)</span>
<span class="sd"> Initializer for the bias vector for the convolution layer</span>
<span class="sd"> for generating the output features.</span>
<span class="sd"> offset_weight_initializer : str or `Initializer`, (Default value = zeros)</span>
<span class="sd"> Initializer for the `weight` weights matrix for the convolution layer</span>
<span class="sd"> for generating the offset.</span>
<span class="sd"> offset_bias_initializer : str or `Initializer`, (Default value = zeros),</span>
<span class="sd"> Initializer for the bias vector for the convolution layer</span>
<span class="sd"> for generating the offset.</span>
<span class="sd"> offset_use_bias: bool, (Default value = True)</span>
<span class="sd"> Whether the layer for generating the offset uses a bias vector.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: 4D input tensor with shape</span>
<span class="sd"> `(batch_size, in_channels, height, width)` when `layout` is `NCHW`.</span>
<span class="sd"> For other layouts shape is permuted accordingly.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: 4D output tensor with shape</span>
<span class="sd"> `(batch_size, channels, out_height, out_width)` when `layout` is `NCHW`.</span>
<span class="sd"> out_height and out_width are calculated as::</span>
<span class="sd"> out_height = floor((height+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/stride[0])+1</span>
<span class="sd"> out_width = floor((width+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/stride[1])+1</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">channels</span><span class="p">,</span> <span class="n">kernel_size</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">strides</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">padding</span><span class="o">=</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span> <span class="n">dilation</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">groups</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span>
<span class="n">num_deformable_group</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">layout</span><span class="o">=</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span> <span class="n">use_bias</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">in_channels</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span>
<span class="n">weight_initializer</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">bias_initializer</span><span class="o">=</span><span class="s1">&#39;zeros&#39;</span><span class="p">,</span>
<span class="n">offset_weight_initializer</span><span class="o">=</span><span class="s1">&#39;zeros&#39;</span><span class="p">,</span> <span class="n">offset_bias_initializer</span><span class="o">=</span><span class="s1">&#39;zeros&#39;</span><span class="p">,</span> <span class="n">offset_use_bias</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span>
<span class="n">op_name</span><span class="o">=</span><span class="s1">&#39;ModulatedDeformableConvolution&#39;</span><span class="p">,</span> <span class="n">adj</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
<span class="nb">super</span><span class="p">(</span><span class="n">ModulatedDeformableConvolution</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_channels</span> <span class="o">=</span> <span class="n">channels</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_in_channels</span> <span class="o">=</span> <span class="n">in_channels</span>
<span class="k">assert</span> <span class="n">layout</span> <span class="ow">in</span> <span class="p">(</span><span class="s1">&#39;NCHW&#39;</span><span class="p">,</span> <span class="s1">&#39;NHWC&#39;</span><span class="p">),</span> <span class="s2">&quot;Only supports &#39;NCHW&#39; and &#39;NHWC&#39; layout for now&quot;</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">kernel_size</span> <span class="o">=</span> <span class="p">(</span><span class="n">kernel_size</span><span class="p">,)</span> <span class="o">*</span> <span class="mi">2</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">strides</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">strides</span> <span class="o">=</span> <span class="p">(</span><span class="n">strides</span><span class="p">,)</span> <span class="o">*</span> <span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">padding</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">padding</span> <span class="o">=</span> <span class="p">(</span><span class="n">padding</span><span class="p">,)</span> <span class="o">*</span> <span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span>
<span class="k">if</span> <span class="nb">isinstance</span><span class="p">(</span><span class="n">dilation</span><span class="p">,</span> <span class="n">numeric_types</span><span class="p">):</span>
<span class="n">dilation</span> <span class="o">=</span> <span class="p">(</span><span class="n">dilation</span><span class="p">,)</span> <span class="o">*</span> <span class="nb">len</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_op_name</span> <span class="o">=</span> <span class="n">op_name</span>
<span class="n">offset_channels</span> <span class="o">=</span> <span class="n">num_deformable_group</span> <span class="o">*</span> <span class="mi">3</span> <span class="o">*</span> <span class="n">kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*</span> <span class="n">kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="bp">self</span><span class="o">.</span><span class="n">offset_split_index</span> <span class="o">=</span> <span class="n">num_deformable_group</span> <span class="o">*</span> <span class="mi">2</span> <span class="o">*</span> <span class="n">kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*</span> <span class="n">kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_layout</span> <span class="o">=</span> <span class="n">layout</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_groups</span> <span class="o">=</span> <span class="n">groups</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_offset_channels</span> <span class="o">=</span> <span class="n">offset_channels</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span> <span class="o">=</span> <span class="n">kernel_size</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_offset</span> <span class="o">=</span> <span class="p">{</span>
<span class="s1">&#39;kernel&#39;</span><span class="p">:</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="s1">&#39;stride&#39;</span><span class="p">:</span> <span class="n">strides</span><span class="p">,</span> <span class="s1">&#39;dilate&#39;</span><span class="p">:</span> <span class="n">dilation</span><span class="p">,</span>
<span class="s1">&#39;pad&#39;</span><span class="p">:</span> <span class="n">padding</span><span class="p">,</span> <span class="s1">&#39;num_filter&#39;</span><span class="p">:</span> <span class="n">offset_channels</span><span class="p">,</span> <span class="s1">&#39;num_group&#39;</span><span class="p">:</span> <span class="n">groups</span><span class="p">,</span>
<span class="s1">&#39;no_bias&#39;</span><span class="p">:</span> <span class="ow">not</span> <span class="n">offset_use_bias</span><span class="p">,</span> <span class="s1">&#39;layout&#39;</span><span class="p">:</span> <span class="n">layout</span><span class="p">}</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_deformable_conv</span> <span class="o">=</span> <span class="p">{</span>
<span class="s1">&#39;kernel&#39;</span><span class="p">:</span> <span class="n">kernel_size</span><span class="p">,</span> <span class="s1">&#39;stride&#39;</span><span class="p">:</span> <span class="n">strides</span><span class="p">,</span> <span class="s1">&#39;dilate&#39;</span><span class="p">:</span> <span class="n">dilation</span><span class="p">,</span>
<span class="s1">&#39;pad&#39;</span><span class="p">:</span> <span class="n">padding</span><span class="p">,</span> <span class="s1">&#39;num_filter&#39;</span><span class="p">:</span> <span class="n">channels</span><span class="p">,</span> <span class="s1">&#39;num_group&#39;</span><span class="p">:</span> <span class="n">groups</span><span class="p">,</span>
<span class="s1">&#39;num_deformable_group&#39;</span><span class="p">:</span> <span class="n">num_deformable_group</span><span class="p">,</span>
<span class="s1">&#39;no_bias&#39;</span><span class="p">:</span> <span class="ow">not</span> <span class="n">use_bias</span><span class="p">,</span> <span class="s1">&#39;layout&#39;</span><span class="p">:</span> <span class="n">layout</span><span class="p">}</span>
<span class="k">if</span> <span class="n">adj</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_offset</span><span class="p">[</span><span class="s1">&#39;adj&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="n">adj</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_deformable_conv</span><span class="p">[</span><span class="s1">&#39;adj&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="n">adj</span>
<span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_weight</span> <span class="o">=</span> <span class="n">Parameter</span><span class="p">(</span><span class="s1">&#39;deformable_conv_weight&#39;</span><span class="p">,</span>
<span class="n">shape</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">pre_infer_weight</span><span class="p">(),</span>
<span class="n">init</span><span class="o">=</span><span class="n">weight_initializer</span><span class="p">,</span>
<span class="n">allow_deferred_init</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="k">if</span> <span class="n">use_bias</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_bias</span> <span class="o">=</span> <span class="n">Parameter</span><span class="p">(</span><span class="s1">&#39;deformable_conv_bias&#39;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="n">channels</span><span class="p">,),</span>
<span class="n">init</span><span class="o">=</span><span class="n">bias_initializer</span><span class="p">,</span>
<span class="n">allow_deferred_init</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_bias</span> <span class="o">=</span> <span class="kc">None</span>
<span class="bp">self</span><span class="o">.</span><span class="n">offset_weight</span> <span class="o">=</span> <span class="n">Parameter</span><span class="p">(</span><span class="s1">&#39;offset_weight&#39;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">pre_infer_offset_weight</span><span class="p">(),</span>
<span class="n">init</span><span class="o">=</span><span class="n">offset_weight_initializer</span><span class="p">,</span>
<span class="n">allow_deferred_init</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="k">if</span> <span class="n">offset_use_bias</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">offset_bias</span> <span class="o">=</span> <span class="n">Parameter</span><span class="p">(</span><span class="s1">&#39;offset_bias&#39;</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="n">offset_channels</span><span class="p">,),</span>
<span class="n">init</span><span class="o">=</span><span class="n">offset_bias_initializer</span><span class="p">,</span>
<span class="n">allow_deferred_init</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">offset_bias</span> <span class="o">=</span> <span class="kc">None</span>
<span class="k">if</span> <span class="n">activation</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">act</span> <span class="o">=</span> <span class="n">Activation</span><span class="p">(</span><span class="n">activation</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">act</span> <span class="o">=</span> <span class="kc">None</span>
<div class="viewcode-block" id="ModulatedDeformableConvolution.forward"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.ModulatedDeformableConvolution.forward">[docs]</a> <span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="n">device</span> <span class="o">=</span> <span class="n">x</span><span class="o">.</span><span class="n">device</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">offset_bias</span> <span class="ow">is</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">offset</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">convolution</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">offset_weight</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span>
<span class="n">cudnn_off</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_offset</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">offset</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">convolution</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">offset_weight</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span>
<span class="bp">self</span><span class="o">.</span><span class="n">offset_bias</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span> <span class="n">cudnn_off</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_offset</span><span class="p">)</span>
<span class="n">offset_t</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">slice_axis</span><span class="p">(</span><span class="n">offset</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">begin</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">end</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">offset_split_index</span><span class="p">)</span>
<span class="n">mask</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">slice_axis</span><span class="p">(</span><span class="n">offset</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">begin</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">offset_split_index</span><span class="p">,</span> <span class="n">end</span><span class="o">=</span><span class="kc">None</span><span class="p">)</span>
<span class="n">mask</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">sigmoid</span><span class="p">(</span><span class="n">mask</span><span class="p">)</span> <span class="o">*</span> <span class="mi">2</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_bias</span> <span class="ow">is</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">act</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">modulated_deformable_convolution</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">x</span><span class="p">,</span> <span class="n">offset</span><span class="o">=</span><span class="n">offset_t</span><span class="p">,</span> <span class="n">mask</span><span class="o">=</span><span class="n">mask</span><span class="p">,</span>
<span class="n">weight</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_weight</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span>
<span class="n">name</span><span class="o">=</span><span class="s1">&#39;fwd&#39;</span><span class="p">,</span> <span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_deformable_conv</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">act</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">modulated_deformable_convolution</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">x</span><span class="p">,</span> <span class="n">offset</span><span class="o">=</span><span class="n">offset_t</span><span class="p">,</span> <span class="n">mask</span><span class="o">=</span><span class="n">mask</span><span class="p">,</span>
<span class="n">weight</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_weight</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span>
<span class="n">bias</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_bias</span><span class="o">.</span><span class="n">data</span><span class="p">(</span><span class="n">device</span><span class="p">),</span> <span class="n">name</span><span class="o">=</span><span class="s1">&#39;fwd&#39;</span><span class="p">,</span>
<span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">_kwargs_deformable_conv</span><span class="p">)</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">act</span><span class="p">:</span>
<span class="n">act</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">act</span><span class="p">(</span><span class="n">act</span><span class="p">)</span>
<span class="k">return</span> <span class="n">act</span></div>
<div class="viewcode-block" id="ModulatedDeformableConvolution.pre_infer_offset_weight"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.ModulatedDeformableConvolution.pre_infer_offset_weight">[docs]</a> <span class="k">def</span> <span class="nf">pre_infer_offset_weight</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;</span>
<span class="sd"> Pre-infer the shape of offsite weight parameter based on kernel size,</span>
<span class="sd"> group size and offset channels</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="n">wshape</span> <span class="o">=</span> <span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">)</span> <span class="o">+</span> <span class="mi">2</span><span class="p">)</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;N&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_offset_channels</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;H&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;W&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="k">return</span> <span class="nb">tuple</span><span class="p">(</span><span class="n">wshape</span><span class="p">)</span></div>
<div class="viewcode-block" id="ModulatedDeformableConvolution.pre_infer_weight"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.ModulatedDeformableConvolution.pre_infer_weight">[docs]</a> <span class="k">def</span> <span class="nf">pre_infer_weight</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;</span>
<span class="sd"> Pre-infer the shape of weight parameter based on kernel size, group size and channels</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="n">wshape</span> <span class="o">=</span> <span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">)</span> <span class="o">+</span> <span class="mi">2</span><span class="p">)</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;N&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_channels</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;H&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;W&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="n">wshape</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="k">return</span> <span class="nb">tuple</span><span class="p">(</span><span class="n">wshape</span><span class="p">)</span></div>
<div class="viewcode-block" id="ModulatedDeformableConvolution.infer_shape"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.ModulatedDeformableConvolution.infer_shape">[docs]</a> <span class="k">def</span> <span class="nf">infer_shape</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="n">dshape1</span> <span class="o">=</span> <span class="n">x</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;C&#39;</span><span class="p">)]</span>
<span class="n">wshape</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_weight</span><span class="o">.</span><span class="n">shape</span>
<span class="n">wshape_offset</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">offset_weight</span><span class="o">.</span><span class="n">shape</span>
<span class="n">wshape_list</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="n">wshape</span><span class="p">)</span>
<span class="n">wshape_offset_list</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="n">wshape_offset</span><span class="p">)</span>
<span class="n">wshape_list</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;C&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="n">dshape1</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="n">wshape_offset_list</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">_layout</span><span class="o">.</span><span class="n">find</span><span class="p">(</span><span class="s1">&#39;C&#39;</span><span class="p">)]</span> <span class="o">=</span> <span class="n">dshape1</span> <span class="o">//</span> <span class="bp">self</span><span class="o">.</span><span class="n">_groups</span>
<span class="bp">self</span><span class="o">.</span><span class="n">deformable_conv_weight</span><span class="o">.</span><span class="n">shape</span> <span class="o">=</span> <span class="nb">tuple</span><span class="p">(</span><span class="n">wshape_list</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">offset_weight</span><span class="o">.</span><span class="n">shape</span> <span class="o">=</span> <span class="nb">tuple</span><span class="p">(</span><span class="n">wshape_offset_list</span><span class="p">)</span></div>
<span class="k">def</span> <span class="nf">_alias</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="k">return</span> <span class="s1">&#39;modulated_deformable_conv&#39;</span></div>
<div class="viewcode-block" id="PixelShuffle1D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.PixelShuffle1D">[docs]</a><span class="nd">@use_np</span>
<span class="k">class</span> <span class="nc">PixelShuffle1D</span><span class="p">(</span><span class="n">HybridBlock</span><span class="p">):</span>
<span class="w"> </span><span class="sa">r</span><span class="sd">&quot;&quot;&quot;Pixel-shuffle layer for upsampling in 1 dimension.</span>
<span class="sd"> Pixel-shuffling is the operation of taking groups of values along</span>
<span class="sd"> the *channel* dimension and regrouping them into blocks of pixels</span>
<span class="sd"> along the ``W`` dimension, thereby effectively multiplying that dimension</span>
<span class="sd"> by a constant factor in size.</span>
<span class="sd"> For example, a feature map of shape :math:`(fC, W)` is reshaped</span>
<span class="sd"> into :math:`(C, fW)` by forming little value groups of size :math:`f`</span>
<span class="sd"> and arranging them in a grid of size :math:`W`.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> factor : int or 1-tuple of int</span>
<span class="sd"> Upsampling factor, applied to the ``W`` dimension.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: Tensor of shape ``(N, f*C, W)``.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: Tensor of shape ``(N, C, W*f)``.</span>
<span class="sd"> Examples</span>
<span class="sd"> --------</span>
<span class="sd"> &gt;&gt;&gt; pxshuf = PixelShuffle1D(2)</span>
<span class="sd"> &gt;&gt;&gt; x = mx.np.zeros((1, 8, 3))</span>
<span class="sd"> &gt;&gt;&gt; pxshuf(x).shape</span>
<span class="sd"> (1, 4, 6)</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">factor</span><span class="p">):</span>
<span class="nb">super</span><span class="p">(</span><span class="n">PixelShuffle1D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_factor</span> <span class="o">=</span> <span class="nb">int</span><span class="p">(</span><span class="n">factor</span><span class="p">)</span>
<div class="viewcode-block" id="PixelShuffle1D.forward"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.PixelShuffle1D.forward">[docs]</a> <span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Perform pixel-shuffling on the input.&quot;&quot;&quot;</span>
<span class="n">f</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_factor</span> <span class="c1"># (N, C*f, W)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="p">(</span><span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">6</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="n">f</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">))</span> <span class="c1"># (N, C, f, W)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">transpose</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">2</span><span class="p">))</span> <span class="c1"># (N, C, W, f)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="p">(</span><span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">5</span><span class="p">))</span> <span class="c1"># (N, C, W*f)</span>
<span class="k">return</span> <span class="n">x</span></div>
<span class="k">def</span> <span class="fm">__repr__</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="k">return</span> <span class="s2">&quot;</span><span class="si">{}</span><span class="s2">(</span><span class="si">{}</span><span class="s2">)&quot;</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="vm">__class__</span><span class="o">.</span><span class="vm">__name__</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">_factor</span><span class="p">)</span></div>
<div class="viewcode-block" id="PixelShuffle2D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.PixelShuffle2D">[docs]</a><span class="nd">@use_np</span>
<span class="k">class</span> <span class="nc">PixelShuffle2D</span><span class="p">(</span><span class="n">HybridBlock</span><span class="p">):</span>
<span class="w"> </span><span class="sa">r</span><span class="sd">&quot;&quot;&quot;Pixel-shuffle layer for upsampling in 2 dimensions.</span>
<span class="sd"> Pixel-shuffling is the operation of taking groups of values along</span>
<span class="sd"> the *channel* dimension and regrouping them into blocks of pixels</span>
<span class="sd"> along the ``H`` and ``W`` dimensions, thereby effectively multiplying</span>
<span class="sd"> those dimensions by a constant factor in size.</span>
<span class="sd"> For example, a feature map of shape :math:`(f^2 C, H, W)` is reshaped</span>
<span class="sd"> into :math:`(C, fH, fW)` by forming little :math:`f \times f` blocks</span>
<span class="sd"> of pixels and arranging them in an :math:`H \times W` grid.</span>
<span class="sd"> Pixel-shuffling together with regular convolution is an alternative,</span>
<span class="sd"> learnable way of upsampling an image by arbitrary factors. It is reported</span>
<span class="sd"> to help overcome checkerboard artifacts that are common in upsampling with</span>
<span class="sd"> transposed convolutions (also called deconvolutions). See the paper</span>
<span class="sd"> `Real-Time Single Image and Video Super-Resolution Using an Efficient</span>
<span class="sd"> Sub-Pixel Convolutional Neural Network &lt;https://arxiv.org/abs/1609.05158&gt;`_</span>
<span class="sd"> for further details.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> factor : int or 2-tuple of int</span>
<span class="sd"> Upsampling factors, applied to the ``H`` and ``W`` dimensions,</span>
<span class="sd"> in that order.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: Tensor of shape ``(N, f1*f2*C, H, W)``.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: Tensor of shape ``(N, C, H*f1, W*f2)``.</span>
<span class="sd"> Examples</span>
<span class="sd"> --------</span>
<span class="sd"> &gt;&gt;&gt; pxshuf = PixelShuffle2D((2, 3))</span>
<span class="sd"> &gt;&gt;&gt; x = mx.np.zeros((1, 12, 3, 5))</span>
<span class="sd"> &gt;&gt;&gt; pxshuf(x).shape</span>
<span class="sd"> (1, 2, 6, 15)</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">factor</span><span class="p">):</span>
<span class="nb">super</span><span class="p">(</span><span class="n">PixelShuffle2D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
<span class="k">try</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_factors</span> <span class="o">=</span> <span class="p">(</span><span class="nb">int</span><span class="p">(</span><span class="n">factor</span><span class="p">),)</span> <span class="o">*</span> <span class="mi">2</span>
<span class="k">except</span> <span class="ne">TypeError</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_factors</span> <span class="o">=</span> <span class="nb">tuple</span><span class="p">(</span><span class="nb">int</span><span class="p">(</span><span class="n">fac</span><span class="p">)</span> <span class="k">for</span> <span class="n">fac</span> <span class="ow">in</span> <span class="n">factor</span><span class="p">)</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_factors</span><span class="p">)</span> <span class="o">==</span> <span class="mi">2</span><span class="p">,</span> <span class="s2">&quot;wrong length </span><span class="si">{}</span><span class="s2">&quot;</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_factors</span><span class="p">))</span>
<div class="viewcode-block" id="PixelShuffle2D.forward"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.PixelShuffle2D.forward">[docs]</a> <span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Perform pixel-shuffling on the input.&quot;&quot;&quot;</span>
<span class="n">f1</span><span class="p">,</span> <span class="n">f2</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_factors</span>
<span class="c1"># (N, f1*f2*C, H, W)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="p">(</span><span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">6</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="n">f1</span> <span class="o">*</span> <span class="n">f2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">))</span> <span class="c1"># (N, C, f1*f2, H, W)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="p">(</span><span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">6</span><span class="p">,</span> <span class="n">f1</span><span class="p">,</span> <span class="n">f2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">))</span> <span class="c1"># (N, C, f1, f2, H, W)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">transpose</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">5</span><span class="p">,</span> <span class="mi">3</span><span class="p">))</span> <span class="c1"># (N, C, H, f1, W, f2)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="p">(</span><span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">5</span><span class="p">,</span> <span class="o">-</span><span class="mi">5</span><span class="p">))</span> <span class="c1"># (N, C, H*f1, W*f2)</span>
<span class="k">return</span> <span class="n">x</span></div>
<span class="k">def</span> <span class="fm">__repr__</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="k">return</span> <span class="s2">&quot;</span><span class="si">{}</span><span class="s2">(</span><span class="si">{}</span><span class="s2">)&quot;</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="vm">__class__</span><span class="o">.</span><span class="vm">__name__</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">_factors</span><span class="p">)</span></div>
<div class="viewcode-block" id="PixelShuffle3D"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.PixelShuffle3D">[docs]</a><span class="nd">@use_np</span>
<span class="k">class</span> <span class="nc">PixelShuffle3D</span><span class="p">(</span><span class="n">HybridBlock</span><span class="p">):</span>
<span class="w"> </span><span class="sa">r</span><span class="sd">&quot;&quot;&quot;Pixel-shuffle layer for upsampling in 3 dimensions.</span>
<span class="sd"> Pixel-shuffling (or voxel-shuffling in 3D) is the operation of taking</span>
<span class="sd"> groups of values along the *channel* dimension and regrouping them into</span>
<span class="sd"> blocks of voxels along the ``D``, ``H`` and ``W`` dimensions, thereby</span>
<span class="sd"> effectively multiplying those dimensions by a constant factor in size.</span>
<span class="sd"> For example, a feature map of shape :math:`(f^3 C, D, H, W)` is reshaped</span>
<span class="sd"> into :math:`(C, fD, fH, fW)` by forming little :math:`f \times f \times f`</span>
<span class="sd"> blocks of voxels and arranging them in a :math:`D \times H \times W` grid.</span>
<span class="sd"> Pixel-shuffling together with regular convolution is an alternative,</span>
<span class="sd"> learnable way of upsampling an image by arbitrary factors. It is reported</span>
<span class="sd"> to help overcome checkerboard artifacts that are common in upsampling with</span>
<span class="sd"> transposed convolutions (also called deconvolutions). See the paper</span>
<span class="sd"> `Real-Time Single Image and Video Super-Resolution Using an Efficient</span>
<span class="sd"> Sub-Pixel Convolutional Neural Network &lt;https://arxiv.org/abs/1609.05158&gt;`_</span>
<span class="sd"> for further details.</span>
<span class="sd"> Parameters</span>
<span class="sd"> ----------</span>
<span class="sd"> factor : int or 3-tuple of int</span>
<span class="sd"> Upsampling factors, applied to the ``D``, ``H`` and ``W``</span>
<span class="sd"> dimensions, in that order.</span>
<span class="sd"> Inputs:</span>
<span class="sd"> - **data**: Tensor of shape ``(N, f1*f2*f3*C, D, H, W)``.</span>
<span class="sd"> Outputs:</span>
<span class="sd"> - **out**: Tensor of shape ``(N, C, D*f1, H*f2, W*f3)``.</span>
<span class="sd"> Examples</span>
<span class="sd"> --------</span>
<span class="sd"> &gt;&gt;&gt; pxshuf = PixelShuffle3D((2, 3, 4))</span>
<span class="sd"> &gt;&gt;&gt; x = mx.np.zeros((1, 48, 3, 5, 7))</span>
<span class="sd"> &gt;&gt;&gt; pxshuf(x).shape</span>
<span class="sd"> (1, 2, 6, 15, 28)</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">factor</span><span class="p">):</span>
<span class="nb">super</span><span class="p">(</span><span class="n">PixelShuffle3D</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
<span class="k">try</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_factors</span> <span class="o">=</span> <span class="p">(</span><span class="nb">int</span><span class="p">(</span><span class="n">factor</span><span class="p">),)</span> <span class="o">*</span> <span class="mi">3</span>
<span class="k">except</span> <span class="ne">TypeError</span><span class="p">:</span>
<span class="bp">self</span><span class="o">.</span><span class="n">_factors</span> <span class="o">=</span> <span class="nb">tuple</span><span class="p">(</span><span class="nb">int</span><span class="p">(</span><span class="n">fac</span><span class="p">)</span> <span class="k">for</span> <span class="n">fac</span> <span class="ow">in</span> <span class="n">factor</span><span class="p">)</span>
<span class="k">assert</span> <span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_factors</span><span class="p">)</span> <span class="o">==</span> <span class="mi">3</span><span class="p">,</span> <span class="s2">&quot;wrong length </span><span class="si">{}</span><span class="s2">&quot;</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">_factors</span><span class="p">))</span>
<div class="viewcode-block" id="PixelShuffle3D.forward"><a class="viewcode-back" href="../../../../api/gluon/nn/index.html#mxnet.gluon.nn.PixelShuffle3D.forward">[docs]</a> <span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="w"> </span><span class="sd">&quot;&quot;&quot;Perform pixel-shuffling on the input.&quot;&quot;&quot;</span>
<span class="c1"># `transpose` doesn&#39;t support 8D, need other implementation</span>
<span class="n">f1</span><span class="p">,</span> <span class="n">f2</span><span class="p">,</span> <span class="n">f3</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">_factors</span>
<span class="c1"># (N, C*f1*f2*f3, D, H, W)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="p">(</span><span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">6</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="n">f1</span> <span class="o">*</span> <span class="n">f2</span> <span class="o">*</span> <span class="n">f3</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">))</span> <span class="c1"># (N, C, f1*f2*f3, D, H, W)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">swapaxes</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">)</span> <span class="c1"># (N, C, D, f1*f2*f3, H, W)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="p">(</span><span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">6</span><span class="p">,</span> <span class="n">f1</span><span class="p">,</span> <span class="n">f2</span><span class="o">*</span><span class="n">f3</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">))</span> <span class="c1"># (N, C, D, f1, f2*f3, H, W)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="p">(</span><span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">5</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">))</span> <span class="c1"># (N, C, D*f1, f2*f3, H, W)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">swapaxes</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">)</span> <span class="c1"># (N, C, D*f1, H, f2*f3, W)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="p">(</span><span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">6</span><span class="p">,</span> <span class="n">f2</span><span class="p">,</span> <span class="n">f3</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">))</span> <span class="c1"># (N, C, D*f1, H, f2, f3, W)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="p">(</span><span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">5</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">))</span> <span class="c1"># (N, C, D*f1, H*f2, f3, W)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">swapaxes</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">)</span> <span class="c1"># (N, C, D*f1, H*f2, W, f3)</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">npx</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="p">(</span><span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">5</span><span class="p">))</span> <span class="c1"># (N, C, D*f1, H*f2, W*f3)</span>
<span class="k">return</span> <span class="n">x</span></div>
<span class="k">def</span> <span class="fm">__repr__</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="k">return</span> <span class="s2">&quot;</span><span class="si">{}</span><span class="s2">(</span><span class="si">{}</span><span class="s2">)&quot;</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="vm">__class__</span><span class="o">.</span><span class="vm">__name__</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">_factors</span><span class="p">)</span></div>
</pre></div>
<hr class="feedback-hr-top" />
<div class="feedback-container">
<div class="feedback-question">Did this page help you?</div>
<div class="feedback-answer-container">
<div class="feedback-answer yes-link" data-response="yes">Yes</div>
<div class="feedback-answer no-link" data-response="no">No</div>
</div>
<div class="feedback-thank-you">Thanks for your feedback!</div>
</div>
<hr class="feedback-hr-bottom" />
</div>
<div class="side-doc-outline">
<div class="side-doc-outline--content">
</div>
</div>
<div class="clearer"></div>
</div><div class="pagenation">
</div>
<footer class="site-footer h-card">
<div class="wrapper">
<div class="row">
<div class="col-4">
<h4 class="footer-category-title">Resources</h4>
<ul class="contact-list">
<li><a href="https://lists.apache.org/list.html?dev@mxnet.apache.org">Mailing list</a> <a class="u-email" href="mailto:dev-subscribe@mxnet.apache.org">(subscribe)</a></li>
<li><a href="https://discuss.mxnet.io">MXNet Discuss forum</a></li>
<li><a href="https://github.com/apache/mxnet/issues">Github Issues</a></li>
<li><a href="https://github.com/apache/mxnet/projects">Projects</a></li>
<li><a href="https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+Home">Developer Wiki</a></li>
<li><a href="/community">Contribute To MXNet</a></li>
</ul>
</div>
<div class="col-4"><ul class="social-media-list"><li><a href="https://github.com/apache/mxnet"><svg class="svg-icon"><use xlink:href="../../../../_static/minima-social-icons.svg#github"></use></svg> <span class="username">apache/mxnet</span></a></li><li><a href="https://www.twitter.com/apachemxnet"><svg class="svg-icon"><use xlink:href="../../../../_static/minima-social-icons.svg#twitter"></use></svg> <span class="username">apachemxnet</span></a></li><li><a href="https://youtube.com/apachemxnet"><svg class="svg-icon"><use xlink:href="../../../../_static/minima-social-icons.svg#youtube"></use></svg> <span class="username">apachemxnet</span></a></li></ul>
</div>
<div class="col-4 footer-text">
<p>A flexible and efficient library for deep learning.</p>
</div>
</div>
</div>
</footer>
<footer class="site-footer2">
<div class="wrapper">
<div class="row">
<div class="col-3">
<img src="../../../../_static/apache_incubator_logo.png" class="footer-logo col-2">
</div>
<div class="footer-bottom-warning col-9">
<p>Apache MXNet is an effort undergoing incubation at <a href="http://www.apache.org/">The Apache Software Foundation</a> (ASF), <span style="font-weight:bold">sponsored by the <i>Apache Incubator</i></span>. Incubation is required
of all newly accepted projects until a further review indicates that the infrastructure,
communications, and decision making process have stabilized in a manner consistent with other
successful ASF projects. While incubation status is not necessarily a reflection of the completeness
or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.
</p><p>"Copyright © 2017-2018, The Apache Software Foundation Apache MXNet, MXNet, Apache, the Apache
feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the
Apache Software Foundation."</p>
</div>
</div>
</div>
</footer>
</body>
</html>