| <!DOCTYPE html> |
| |
| <html xmlns="http://www.w3.org/1999/xhtml"> |
| <head> |
| <meta charset="utf-8" /> |
| <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> |
| <meta http-equiv="x-ua-compatible" content="ie=edge"> |
| <style> |
| .dropdown { |
| position: relative; |
| display: inline-block; |
| } |
| |
| .dropdown-content { |
| display: none; |
| position: absolute; |
| background-color: #f9f9f9; |
| min-width: 160px; |
| box-shadow: 0px 8px 16px 0px rgba(0,0,0,0.2); |
| padding: 12px 16px; |
| z-index: 1; |
| text-align: left; |
| } |
| |
| .dropdown:hover .dropdown-content { |
| display: block; |
| } |
| |
| .dropdown-option:hover { |
| color: #FF4500; |
| } |
| |
| .dropdown-option-active { |
| color: #FF4500; |
| font-weight: lighter; |
| } |
| |
| .dropdown-option { |
| color: #000000; |
| font-weight: lighter; |
| } |
| |
| .dropdown-header { |
| color: #FFFFFF; |
| display: inline-flex; |
| } |
| |
| .dropdown-caret { |
| width: 18px; |
| } |
| |
| .dropdown-caret-path { |
| fill: #FFFFFF; |
| } |
| </style> |
| |
| <title>gluon.nn — Apache MXNet documentation</title> |
| |
| <link rel="stylesheet" href="../../../_static/basic.css" type="text/css" /> |
| <link rel="stylesheet" href="../../../_static/pygments.css" type="text/css" /> |
| <link rel="stylesheet" type="text/css" href="../../../_static/mxnet.css" /> |
| <link rel="stylesheet" href="../../../_static/material-design-lite-1.3.0/material.blue-deep_orange.min.css" type="text/css" /> |
| <link rel="stylesheet" href="../../../_static/sphinx_materialdesign_theme.css" type="text/css" /> |
| <link rel="stylesheet" href="../../../_static/fontawesome/all.css" type="text/css" /> |
| <link rel="stylesheet" href="../../../_static/fonts.css" type="text/css" /> |
| <link rel="stylesheet" href="../../../_static/feedback.css" type="text/css" /> |
| <script id="documentation_options" data-url_root="../../../" src="../../../_static/documentation_options.js"></script> |
| <script src="../../../_static/jquery.js"></script> |
| <script src="../../../_static/underscore.js"></script> |
| <script src="../../../_static/doctools.js"></script> |
| <script src="../../../_static/language_data.js"></script> |
| <script src="../../../_static/google_analytics.js"></script> |
| <script src="../../../_static/autodoc.js"></script> |
| <script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script> |
| <script async="async" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/latest.js?config=TeX-AMS-MML_HTMLorMML"></script> |
| <script type="text/x-mathjax-config">MathJax.Hub.Config({"tex2jax": {"inlineMath": [["$", "$"], ["\\(", "\\)"]], "processEscapes": true, "ignoreClass": "document", "processClass": "math|output_area"}})</script> |
| <link rel="shortcut icon" href="../../../_static/mxnet-icon.png"/> |
| <link rel="index" title="Index" href="../../../genindex.html" /> |
| <link rel="search" title="Search" href="../../../search.html" /> |
| <link rel="next" title="gluon.rnn" href="../rnn/index.html" /> |
| <link rel="prev" title="gluon.model_zoo.vision" href="../model_zoo/index.html" /> |
| </head> |
| <body><header class="site-header" role="banner"> |
| <div class="wrapper"> |
<a class="site-title" rel="author" href="/versions/1.8.0/"><img
          src="../../../_static/mxnet_logo.png" class="site-header-logo" alt="Apache MXNet"></a>
| <nav class="site-nav"> |
| <input type="checkbox" id="nav-trigger" class="nav-trigger"/> |
| <label for="nav-trigger"> |
| <span class="menu-icon"> |
| <svg viewBox="0 0 18 15" width="18px" height="15px"> |
| <path d="M18,1.484c0,0.82-0.665,1.484-1.484,1.484H1.484C0.665,2.969,0,2.304,0,1.484l0,0C0,0.665,0.665,0,1.484,0 h15.032C17.335,0,18,0.665,18,1.484L18,1.484z M18,7.516C18,8.335,17.335,9,16.516,9H1.484C0.665,9,0,8.335,0,7.516l0,0 c0-0.82,0.665-1.484,1.484-1.484h15.032C17.335,6.031,18,6.696,18,7.516L18,7.516z M18,13.516C18,14.335,17.335,15,16.516,15H1.484 C0.665,15,0,14.335,0,13.516l0,0c0-0.82,0.665-1.483,1.484-1.483h15.032C17.335,12.031,18,12.695,18,13.516L18,13.516z"/> |
| </svg> |
| </span> |
| </label> |
| |
| <div class="trigger"> |
| <a class="page-link" href="/versions/1.8.0/get_started">Get Started</a> |
| <a class="page-link" href="/versions/1.8.0/blog">Blog</a> |
| <a class="page-link" href="/versions/1.8.0/features">Features</a> |
| <a class="page-link" href="/versions/1.8.0/ecosystem">Ecosystem</a> |
| <a class="page-link page-current" href="/versions/1.8.0/api">Docs & Tutorials</a> |
| <a class="page-link" href="https://github.com/apache/incubator-mxnet">GitHub</a> |
| <div class="dropdown"> |
| <span class="dropdown-header">1.8.0 |
<svg class="dropdown-caret icon icon-caret-bottom" viewBox="0 0 32 32" aria-hidden="true"><path class="dropdown-caret-path" d="M24 11.305l-7.997 11.39L8 11.305z"></path></svg>
| </span> |
| <div class="dropdown-content"> |
| <a class="dropdown-option" href="/">master</a><br> |
| <a class="dropdown-option-active" href="/versions/1.8.0/">1.8.0</a><br> |
| <a class="dropdown-option" href="/versions/1.7.0/">1.7.0</a><br> |
| <a class="dropdown-option" href="/versions/1.6.0/">1.6.0</a><br> |
| <a class="dropdown-option" href="/versions/1.5.0/">1.5.0</a><br> |
| <a class="dropdown-option" href="/versions/1.4.1/">1.4.1</a><br> |
| <a class="dropdown-option" href="/versions/1.3.1/">1.3.1</a><br> |
| <a class="dropdown-option" href="/versions/1.2.1/">1.2.1</a><br> |
| <a class="dropdown-option" href="/versions/1.1.0/">1.1.0</a><br> |
| <a class="dropdown-option" href="/versions/1.0.0/">1.0.0</a><br> |
| <a class="dropdown-option" href="/versions/0.12.1/">0.12.1</a><br> |
| <a class="dropdown-option" href="/versions/0.11.0/">0.11.0</a> |
| </div> |
| </div> |
| </div> |
| </nav> |
| </div> |
| </header> |
| <div class="mdl-layout mdl-js-layout mdl-layout--fixed-header mdl-layout--fixed-drawer"><header class="mdl-layout__header mdl-layout__header--waterfall "> |
| <div class="mdl-layout__header-row"> |
| |
| <nav class="mdl-navigation breadcrumb"> |
| <a class="mdl-navigation__link" href="../../index.html">Python API</a><i class="material-icons">navigate_next</i> |
| <a class="mdl-navigation__link" href="../index.html">mxnet.gluon</a><i class="material-icons">navigate_next</i> |
| <a class="mdl-navigation__link is-active">gluon.nn</a> |
| </nav> |
| <div class="mdl-layout-spacer"></div> |
| <nav class="mdl-navigation"> |
| |
| <form class="form-inline pull-sm-right" action="../../../search.html" method="get"> |
| <div class="mdl-textfield mdl-js-textfield mdl-textfield--expandable mdl-textfield--floating-label mdl-textfield--align-right"> |
| <label id="quick-search-icon" class="mdl-button mdl-js-button mdl-button--icon" for="waterfall-exp"> |
| <i class="material-icons">search</i> |
| </label> |
| <div class="mdl-textfield__expandable-holder"> |
| <input class="mdl-textfield__input" type="text" name="q" id="waterfall-exp" placeholder="Search" /> |
| <input type="hidden" name="check_keywords" value="yes" /> |
| <input type="hidden" name="area" value="default" /> |
| </div> |
| </div> |
| <div class="mdl-tooltip" data-mdl-for="quick-search-icon"> |
| Quick search |
| </div> |
| </form> |
| |
| <a id="button-show-source" |
| class="mdl-button mdl-js-button mdl-button--icon" |
| href="../../../_sources/api/gluon/nn/index.rst" rel="nofollow"> |
| <i class="material-icons">code</i> |
| </a> |
| <div class="mdl-tooltip" data-mdl-for="button-show-source"> |
| Show Source |
| </div> |
| </nav> |
| </div> |
| <div class="mdl-layout__header-row header-links"> |
| <div class="mdl-layout-spacer"></div> |
| <nav class="mdl-navigation"> |
| </nav> |
| </div> |
| </header><header class="mdl-layout__drawer"> |
| |
| <div class="globaltoc"> |
| <span class="mdl-layout-title toc">Table Of Contents</span> |
| |
| |
| |
| <nav class="mdl-navigation"> |
| <ul class="current"> |
| <li class="toctree-l1"><a class="reference internal" href="../../../tutorials/index.html">Python Tutorials</a><ul> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/getting-started/index.html">Getting Started</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/index.html">Crash Course</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/1-ndarray.html">Manipulate data with <code class="docutils literal notranslate"><span class="pre">ndarray</span></code></a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/2-nn.html">Create a neural network</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/3-autograd.html">Automatic differentiation with <code class="docutils literal notranslate"><span class="pre">autograd</span></code></a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/4-train.html">Train the neural network</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/5-predict.html">Predict with a pre-trained model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/6-use_gpus.html">Use GPUs</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/index.html">Moving to MXNet from Other Frameworks</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/pytorch.html">PyTorch vs Apache MXNet</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/gluon_from_experiment_to_deployment.html">Gluon: from experiment to deployment</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/logistic_regression_explained.html">Logistic regression explained</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/image/mnist.html">MNIST</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/packages/index.html">Packages</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/autograd/index.html">Automatic Differentiation</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/gluon/index.html">Gluon</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/index.html">Blocks</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/custom-layer.html">Custom Layers</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/custom_layer_beginners.html">Customer Layers (Beginners)</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/hybridize.html">Hybridize</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/init.html">Initialization</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/naming.html">Parameter and Block Naming</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/nn.html">Layers and Blocks</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/parameters.html">Parameter Management</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/activations/activations.html">Activation Blocks</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/data/index.html">Data Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html">Image Augmentation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html#Spatial-Augmentation">Spatial Augmentation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html#Color-Augmentation">Color Augmentation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html#Composed-Augmentations">Composed Augmentations</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html">Gluon <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s and <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-included-Datasets">Using own data with included <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-custom-Datasets">Using own data with custom <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Appendix:-Upgrading-from-Module-DataIter-to-Gluon-DataLoader">Appendix: Upgrading from Module <code class="docutils literal notranslate"><span class="pre">DataIter</span></code> to Gluon <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/image/index.html">Image Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/image-augmentation.html">Image Augmentation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/info_gan.html">Image similarity search with InfoGAN</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/mnist.html">Handwritten Digit Recognition</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/pretrained_models.html">Using pre-trained models in MXNet</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/index.html">Losses</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/custom-loss.html">Custom Loss Blocks</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/kl_divergence.html">Kullback-Leibler (KL) Divergence</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/loss.html">Loss functions</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/text/index.html">Text Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/gnmt.html">Google Neural Machine Translation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/transformer.html">Machine Translation with Transformer</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/training/index.html">Training</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/fit_api_tutorial.html">MXNet Gluon Fit API</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/trainer.html">Trainer</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/index.html">Learning Rates</a><ul> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_finder.html">Learning Rate Finder</a></li> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules.html">Learning Rate Schedules</a></li> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules_advanced.html">Advanced Learning Rate Schedules</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/normalization/index.html">Normalization Blocks</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/kvstore/index.html">KVStore</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/kvstore/kvstore.html">Distributed Key-Value Store</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/ndarray/index.html">NDArray</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/01-ndarray-intro.html">An Intro: Manipulate Data the MXNet Way with NDArray</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/02-ndarray-operations.html">NDArray Operations</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/03-ndarray-contexts.html">NDArray Contexts</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/gotchas_numpy_in_mxnet.html">Gotchas using NumPy in Apache MXNet</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/index.html">Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/csr.html">CSRNDArray - NDArray in Compressed Sparse Row Storage Format</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/row_sparse.html">RowSparseNDArray - NDArray for Sparse Gradient Updates</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/train.html">Train a Linear Regression Model with Sparse Symbols</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/train_gluon.html">Sparse NDArrays with Gluon</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/onnx/index.html">ONNX</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/fine_tuning_gluon.html">Fine-tuning an ONNX model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/inference_on_onnx_model.html">Running inference on MXNet/Gluon from an ONNX model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/super_resolution.html">Importing an ONNX model into MXNet</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/deploy/export/onnx.html">Export ONNX Models</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/optimizer/index.html">Optimizers</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/viz/index.html">Visualization</a><ul> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/visualize_graph">Visualize networks</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/performance/index.html">Performance</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/compression/index.html">Compression</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/compression/int8.html">Deploy with int-8</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/float16">Float16</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/gradient_compression">Gradient Compression</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/int8_inference.html">GluonCV with Quantized Models</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/backend/index.html">Accelerated Backend Tools</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/index.html">Intel MKL-DNN</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/mkldnn_quantization.html">Quantize with MKL-DNN backend</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/mkldnn_readme.html">Install MXNet with MKL-DNN</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/tensorrt/index.html">TensorRT</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/tensorrt/tensorrt.html">Optimizing Deep Learning Computation Graphs with TensorRT</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/tvm.html">Use TVM</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/profiler.html">Profiling MXNet Models</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/amp.html">Using AMP: Automatic Mixed Precision</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/deploy/index.html">Deployment</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/export/index.html">Export</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/export/onnx.html">Exporting to ONNX format</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/export_network.html">Export Gluon CV Models</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Save / Load Parameters</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/inference/index.html">Inference</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/cpp.html">Deploy into C++</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/image_classification_jetson.html">Image Classication using pretrained ResNet-50 model on Jetson module</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/scala.html">Deploy into a Java or Scala Environment</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/wine_detector.html">Real-time Object Detection with MXNet On The Raspberry Pi</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/index.html">Run on AWS</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_ec2.html">Run on an EC2 Instance</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_sagemaker.html">Run on Amazon SageMaker</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/cloud.html">MXNet on the Cloud</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/extend/index.html">Extend</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/extend/custom_layer.html">Custom Layers</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/extend/customop.html">Custom Numpy Operators</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/new_op">New Operator Creation</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/add_op_in_backend">New Operator in MXNet Backend</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l1 current"><a class="reference internal" href="../../index.html">Python API</a><ul class="current"> |
| <li class="toctree-l2"><a class="reference internal" href="../../ndarray/index.html">mxnet.ndarray</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/ndarray.html">ndarray</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/contrib/index.html">ndarray.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/image/index.html">ndarray.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/linalg/index.html">ndarray.linalg</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/op/index.html">ndarray.op</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/random/index.html">ndarray.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/register/index.html">ndarray.register</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/sparse/index.html">ndarray.sparse</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/utils/index.html">ndarray.utils</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2 current"><a class="reference internal" href="../index.html">mxnet.gluon</a><ul class="current"> |
| <li class="toctree-l3"><a class="reference internal" href="../block.html">gluon.Block</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../hybrid_block.html">gluon.HybridBlock</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../symbol_block.html">gluon.SymbolBlock</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../constant.html">gluon.Constant</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../parameter.html">gluon.Parameter</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../parameter_dict.html">gluon.ParameterDict</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../trainer.html">gluon.Trainer</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../contrib/index.html">gluon.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../data/index.html">gluon.data</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../data/vision/index.html">data.vision</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../data/vision/datasets/index.html">vision.datasets</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../data/vision/transforms/index.html">vision.transforms</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../loss/index.html">gluon.loss</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../model_zoo/index.html">gluon.model_zoo.vision</a></li> |
| <li class="toctree-l3 current"><a class="current reference internal" href="#">gluon.nn</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../rnn/index.html">gluon.rnn</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../utils/index.html">gluon.utils</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../autograd/index.html">mxnet.autograd</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../initializer/index.html">mxnet.initializer</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../optimizer/index.html">mxnet.optimizer</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../lr_scheduler/index.html">mxnet.lr_scheduler</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../metric/index.html">mxnet.metric</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../kvstore/index.html">mxnet.kvstore</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../symbol/index.html">mxnet.symbol</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/symbol.html">symbol</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/contrib/index.html">symbol.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/image/index.html">symbol.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/linalg/index.html">symbol.linalg</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/op/index.html">symbol.op</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/random/index.html">symbol.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/register/index.html">symbol.register</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/sparse/index.html">symbol.sparse</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../module/index.html">mxnet.module</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../contrib/index.html">mxnet.contrib</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/autograd/index.html">contrib.autograd</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/io/index.html">contrib.io</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/ndarray/index.html">contrib.ndarray</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/onnx/index.html">contrib.onnx</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/quantization/index.html">contrib.quantization</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/symbol/index.html">contrib.symbol</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorboard/index.html">contrib.tensorboard</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorrt/index.html">contrib.tensorrt</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/text/index.html">contrib.text</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../mxnet/index.html">mxnet</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/attribute/index.html">mxnet.attribute</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/base/index.html">mxnet.base</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/callback/index.html">mxnet.callback</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/context/index.html">mxnet.context</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/engine/index.html">mxnet.engine</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/executor/index.html">mxnet.executor</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/executor_manager/index.html">mxnet.executor_manager</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/image/index.html">mxnet.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/io/index.html">mxnet.io</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/kvstore_server/index.html">mxnet.kvstore_server</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/libinfo/index.html">mxnet.libinfo</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/log/index.html">mxnet.log</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/model/index.html">mxnet.model</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/monitor/index.html">mxnet.monitor</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/name/index.html">mxnet.name</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/notebook/index.html">mxnet.notebook</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/operator/index.html">mxnet.operator</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/profiler/index.html">mxnet.profiler</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/random/index.html">mxnet.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/recordio/index.html">mxnet.recordio</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/registry/index.html">mxnet.registry</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/rtc/index.html">mxnet.rtc</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/runtime/index.html">mxnet.runtime</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/test_utils/index.html">mxnet.test_utils</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/torch/index.html">mxnet.torch</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/util/index.html">mxnet.util</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/visualization/index.html">mxnet.visualization</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| </ul> |
| |
| </nav> |
| |
| </div> |
| |
| </header> |
| <main class="mdl-layout__content" tabIndex="0"> |
| |
| <script type="text/javascript" src="../../../_static/sphinx_materialdesign_theme.js"></script> |
| <script type="text/javascript" src="../../../_static/feedback.js"></script> |
| <header class="mdl-layout__drawer"> |
| |
| <div class="globaltoc"> |
| <span class="mdl-layout-title toc">Table Of Contents</span> |
| |
| |
| |
| <nav class="mdl-navigation"> |
| <ul class="current"> |
| <li class="toctree-l1"><a class="reference internal" href="../../../tutorials/index.html">Python Tutorials</a><ul> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/getting-started/index.html">Getting Started</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/index.html">Crash Course</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/1-ndarray.html">Manipulate data with <code class="docutils literal notranslate"><span class="pre">ndarray</span></code></a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/2-nn.html">Create a neural network</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/3-autograd.html">Automatic differentiation with <code class="docutils literal notranslate"><span class="pre">autograd</span></code></a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/4-train.html">Train the neural network</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/5-predict.html">Predict with a pre-trained model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/6-use_gpus.html">Use GPUs</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/index.html">Moving to MXNet from Other Frameworks</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/pytorch.html">PyTorch vs Apache MXNet</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/gluon_from_experiment_to_deployment.html">Gluon: from experiment to deployment</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/logistic_regression_explained.html">Logistic regression explained</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/image/mnist.html">MNIST</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/packages/index.html">Packages</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/autograd/index.html">Automatic Differentiation</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/gluon/index.html">Gluon</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/index.html">Blocks</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/custom-layer.html">Custom Layers</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/custom_layer_beginners.html">Custom Layers (Beginners)</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/hybridize.html">Hybridize</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/init.html">Initialization</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/naming.html">Parameter and Block Naming</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/nn.html">Layers and Blocks</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/parameters.html">Parameter Management</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/activations/activations.html">Activation Blocks</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/data/index.html">Data Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html">Image Augmentation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html#Spatial-Augmentation">Spatial Augmentation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html#Color-Augmentation">Color Augmentation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html#Composed-Augmentations">Composed Augmentations</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html">Gluon <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s and <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-included-Datasets">Using own data with included <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-custom-Datasets">Using own data with custom <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Appendix:-Upgrading-from-Module-DataIter-to-Gluon-DataLoader">Appendix: Upgrading from Module <code class="docutils literal notranslate"><span class="pre">DataIter</span></code> to Gluon <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/image/index.html">Image Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/image-augmentation.html">Image Augmentation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/info_gan.html">Image similarity search with InfoGAN</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/mnist.html">Handwritten Digit Recognition</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/pretrained_models.html">Using pre-trained models in MXNet</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/index.html">Losses</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/custom-loss.html">Custom Loss Blocks</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/kl_divergence.html">Kullback-Leibler (KL) Divergence</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/loss.html">Loss functions</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/text/index.html">Text Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/gnmt.html">Google Neural Machine Translation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/transformer.html">Machine Translation with Transformer</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/training/index.html">Training</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/fit_api_tutorial.html">MXNet Gluon Fit API</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/trainer.html">Trainer</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/index.html">Learning Rates</a><ul> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_finder.html">Learning Rate Finder</a></li> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules.html">Learning Rate Schedules</a></li> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules_advanced.html">Advanced Learning Rate Schedules</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/normalization/index.html">Normalization Blocks</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/kvstore/index.html">KVStore</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/kvstore/kvstore.html">Distributed Key-Value Store</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/ndarray/index.html">NDArray</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/01-ndarray-intro.html">An Intro: Manipulate Data the MXNet Way with NDArray</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/02-ndarray-operations.html">NDArray Operations</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/03-ndarray-contexts.html">NDArray Contexts</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/gotchas_numpy_in_mxnet.html">Gotchas using NumPy in Apache MXNet</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/index.html">Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/csr.html">CSRNDArray - NDArray in Compressed Sparse Row Storage Format</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/row_sparse.html">RowSparseNDArray - NDArray for Sparse Gradient Updates</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/train.html">Train a Linear Regression Model with Sparse Symbols</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/train_gluon.html">Sparse NDArrays with Gluon</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/onnx/index.html">ONNX</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/fine_tuning_gluon.html">Fine-tuning an ONNX model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/inference_on_onnx_model.html">Running inference on MXNet/Gluon from an ONNX model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/super_resolution.html">Importing an ONNX model into MXNet</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/deploy/export/onnx.html">Export ONNX Models</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/optimizer/index.html">Optimizers</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/viz/index.html">Visualization</a><ul> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/visualize_graph">Visualize networks</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/performance/index.html">Performance</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/compression/index.html">Compression</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/compression/int8.html">Deploy with int-8</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/float16">Float16</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/gradient_compression">Gradient Compression</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/int8_inference.html">GluonCV with Quantized Models</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/backend/index.html">Accelerated Backend Tools</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/index.html">Intel MKL-DNN</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/mkldnn_quantization.html">Quantize with MKL-DNN backend</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/mkldnn_readme.html">Install MXNet with MKL-DNN</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/tensorrt/index.html">TensorRT</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/tensorrt/tensorrt.html">Optimizing Deep Learning Computation Graphs with TensorRT</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/tvm.html">Use TVM</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/profiler.html">Profiling MXNet Models</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/amp.html">Using AMP: Automatic Mixed Precision</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/deploy/index.html">Deployment</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/export/index.html">Export</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/export/onnx.html">Exporting to ONNX format</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/export_network.html">Export Gluon CV Models</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Save / Load Parameters</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/inference/index.html">Inference</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/cpp.html">Deploy into C++</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/image_classification_jetson.html">Image Classification using pretrained ResNet-50 model on Jetson module</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/scala.html">Deploy into a Java or Scala Environment</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/wine_detector.html">Real-time Object Detection with MXNet On The Raspberry Pi</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/index.html">Run on AWS</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_ec2.html">Run on an EC2 Instance</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_sagemaker.html">Run on Amazon SageMaker</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/cloud.html">MXNet on the Cloud</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/extend/index.html">Extend</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/extend/custom_layer.html">Custom Layers</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/extend/customop.html">Custom Numpy Operators</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/new_op">New Operator Creation</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/add_op_in_backend">New Operator in MXNet Backend</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l1 current"><a class="reference internal" href="../../index.html">Python API</a><ul class="current"> |
| <li class="toctree-l2"><a class="reference internal" href="../../ndarray/index.html">mxnet.ndarray</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/ndarray.html">ndarray</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/contrib/index.html">ndarray.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/image/index.html">ndarray.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/linalg/index.html">ndarray.linalg</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/op/index.html">ndarray.op</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/random/index.html">ndarray.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/register/index.html">ndarray.register</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/sparse/index.html">ndarray.sparse</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/utils/index.html">ndarray.utils</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2 current"><a class="reference internal" href="../index.html">mxnet.gluon</a><ul class="current"> |
| <li class="toctree-l3"><a class="reference internal" href="../block.html">gluon.Block</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../hybrid_block.html">gluon.HybridBlock</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../symbol_block.html">gluon.SymbolBlock</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../constant.html">gluon.Constant</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../parameter.html">gluon.Parameter</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../parameter_dict.html">gluon.ParameterDict</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../trainer.html">gluon.Trainer</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../contrib/index.html">gluon.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../data/index.html">gluon.data</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../data/vision/index.html">data.vision</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../data/vision/datasets/index.html">vision.datasets</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../data/vision/transforms/index.html">vision.transforms</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../loss/index.html">gluon.loss</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../model_zoo/index.html">gluon.model_zoo.vision</a></li> |
| <li class="toctree-l3 current"><a class="current reference internal" href="#">gluon.nn</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../rnn/index.html">gluon.rnn</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../utils/index.html">gluon.utils</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../autograd/index.html">mxnet.autograd</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../initializer/index.html">mxnet.initializer</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../optimizer/index.html">mxnet.optimizer</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../lr_scheduler/index.html">mxnet.lr_scheduler</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../metric/index.html">mxnet.metric</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../kvstore/index.html">mxnet.kvstore</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../symbol/index.html">mxnet.symbol</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/symbol.html">symbol</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/contrib/index.html">symbol.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/image/index.html">symbol.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/linalg/index.html">symbol.linalg</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/op/index.html">symbol.op</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/random/index.html">symbol.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/register/index.html">symbol.register</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/sparse/index.html">symbol.sparse</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../module/index.html">mxnet.module</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../contrib/index.html">mxnet.contrib</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/autograd/index.html">contrib.autograd</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/io/index.html">contrib.io</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/ndarray/index.html">contrib.ndarray</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/onnx/index.html">contrib.onnx</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/quantization/index.html">contrib.quantization</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/symbol/index.html">contrib.symbol</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorboard/index.html">contrib.tensorboard</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorrt/index.html">contrib.tensorrt</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/text/index.html">contrib.text</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../mxnet/index.html">mxnet</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/attribute/index.html">mxnet.attribute</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/base/index.html">mxnet.base</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/callback/index.html">mxnet.callback</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/context/index.html">mxnet.context</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/engine/index.html">mxnet.engine</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/executor/index.html">mxnet.executor</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/executor_manager/index.html">mxnet.executor_manager</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/image/index.html">mxnet.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/io/index.html">mxnet.io</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/kvstore_server/index.html">mxnet.kvstore_server</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/libinfo/index.html">mxnet.libinfo</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/log/index.html">mxnet.log</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/model/index.html">mxnet.model</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/monitor/index.html">mxnet.monitor</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/name/index.html">mxnet.name</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/notebook/index.html">mxnet.notebook</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/operator/index.html">mxnet.operator</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/profiler/index.html">mxnet.profiler</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/random/index.html">mxnet.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/recordio/index.html">mxnet.recordio</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/registry/index.html">mxnet.registry</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/rtc/index.html">mxnet.rtc</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/runtime/index.html">mxnet.runtime</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/test_utils/index.html">mxnet.test_utils</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/torch/index.html">mxnet.torch</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/util/index.html">mxnet.util</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/visualization/index.html">mxnet.visualization</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| </ul> |
| |
| </nav> |
| |
| </div> |
| |
| </header> |
| |
| <div class="document"> |
| <div class="page-content" role="main"> |
| |
| <div class="section" id="gluon-nn"> |
| <h1>gluon.nn<a class="headerlink" href="#gluon-nn" title="Permalink to this headline">¶</a></h1> |
| <p>Gluon provides a large number of built-in neural network layers in the following |
| two modules:</p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#module-mxnet.gluon.nn" title="mxnet.gluon.nn"><code class="xref py py-obj docutils literal notranslate"><span class="pre">mxnet.gluon.nn</span></code></a></p></td> |
| <td><p>Neural network layers.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="../contrib/index.html#module-mxnet.gluon.contrib.nn" title="mxnet.gluon.contrib.nn"><code class="xref py py-obj docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.nn</span></code></a></p></td> |
| <td><p>Contributed neural network modules.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <p>We group all layers in these two modules according to their categories.</p> |
| <div class="section" id="sequential-containers"> |
| <h2>Sequential containers<a class="headerlink" href="#sequential-containers" title="Permalink to this headline">¶</a></h2> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Sequential" title="mxnet.gluon.nn.Sequential"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.Sequential</span></code></a></p></td> |
| <td><p>Stacks Blocks sequentially.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridSequential" title="mxnet.gluon.nn.HybridSequential"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.HybridSequential</span></code></a></p></td> |
| <td><p>Stacks HybridBlocks sequentially.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <div class="section" id="basic-layers"> |
| <h2>Basic Layers<a class="headerlink" href="#basic-layers" title="Permalink to this headline">¶</a></h2> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Dense" title="mxnet.gluon.nn.Dense"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.Dense</span></code></a></p></td> |
| <td><p>Just your regular densely-connected NN layer.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Activation" title="mxnet.gluon.nn.Activation"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.Activation</span></code></a></p></td> |
| <td><p>Applies an activation function to input.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Dropout" title="mxnet.gluon.nn.Dropout"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.Dropout</span></code></a></p></td> |
| <td><p>Applies Dropout to the input.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Flatten" title="mxnet.gluon.nn.Flatten"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.Flatten</span></code></a></p></td> |
| <td><p>Flattens the input to two dimensions.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Lambda" title="mxnet.gluon.nn.Lambda"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.Lambda</span></code></a></p></td> |
| <td><p>Wraps an operator or an expression as a Block object.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridLambda" title="mxnet.gluon.nn.HybridLambda"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.HybridLambda</span></code></a></p></td> |
| <td><p>Wraps an operator or an expression as a HybridBlock object.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <div class="section" id="convolutional-layers"> |
| <h2>Convolutional Layers<a class="headerlink" href="#convolutional-layers" title="Permalink to this headline">¶</a></h2> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Conv1D" title="mxnet.gluon.nn.Conv1D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.Conv1D</span></code></a></p></td> |
| <td><p>1D convolution layer (e.g. temporal convolution).</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Conv2D" title="mxnet.gluon.nn.Conv2D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.Conv2D</span></code></a></p></td> |
| <td><p>2D convolution layer (e.g. spatial convolution over images).</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Conv3D" title="mxnet.gluon.nn.Conv3D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.Conv3D</span></code></a></p></td> |
| <td><p>3D convolution layer (e.g. spatial convolution over volumes).</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Conv1DTranspose" title="mxnet.gluon.nn.Conv1DTranspose"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.Conv1DTranspose</span></code></a></p></td> |
| <td><p>Transposed 1D convolution layer (sometimes called Deconvolution).</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Conv2DTranspose" title="mxnet.gluon.nn.Conv2DTranspose"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.Conv2DTranspose</span></code></a></p></td> |
| <td><p>Transposed 2D convolution layer (sometimes called Deconvolution).</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Conv3DTranspose" title="mxnet.gluon.nn.Conv3DTranspose"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.Conv3DTranspose</span></code></a></p></td> |
| <td><p>Transposed 3D convolution layer (sometimes called Deconvolution).</p></td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <div class="section" id="pooling-layers"> |
| <h2>Pooling Layers<a class="headerlink" href="#pooling-layers" title="Permalink to this headline">¶</a></h2> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.MaxPool1D" title="mxnet.gluon.nn.MaxPool1D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.MaxPool1D</span></code></a></p></td> |
| <td><p>Max pooling operation for one dimensional data.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.MaxPool2D" title="mxnet.gluon.nn.MaxPool2D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.MaxPool2D</span></code></a></p></td> |
| <td><p>Max pooling operation for two dimensional (spatial) data.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.MaxPool3D" title="mxnet.gluon.nn.MaxPool3D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.MaxPool3D</span></code></a></p></td> |
| <td><p>Max pooling operation for 3D data (spatial or spatio-temporal).</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.AvgPool1D" title="mxnet.gluon.nn.AvgPool1D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.AvgPool1D</span></code></a></p></td> |
| <td><p>Average pooling operation for temporal data.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.AvgPool2D" title="mxnet.gluon.nn.AvgPool2D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.AvgPool2D</span></code></a></p></td> |
| <td><p>Average pooling operation for spatial data.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.AvgPool3D" title="mxnet.gluon.nn.AvgPool3D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.AvgPool3D</span></code></a></p></td> |
| <td><p>Average pooling operation for 3D data (spatial or spatio-temporal).</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GlobalMaxPool1D" title="mxnet.gluon.nn.GlobalMaxPool1D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.GlobalMaxPool1D</span></code></a></p></td> |
| <td><p>Global max pooling operation for one dimensional (temporal) data.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GlobalMaxPool2D" title="mxnet.gluon.nn.GlobalMaxPool2D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.GlobalMaxPool2D</span></code></a></p></td> |
| <td><p>Global max pooling operation for two dimensional (spatial) data.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GlobalMaxPool3D" title="mxnet.gluon.nn.GlobalMaxPool3D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.GlobalMaxPool3D</span></code></a></p></td> |
| <td><p>Global max pooling operation for 3D data (spatial or spatio-temporal).</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GlobalAvgPool1D" title="mxnet.gluon.nn.GlobalAvgPool1D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.GlobalAvgPool1D</span></code></a></p></td> |
| <td><p>Global average pooling operation for temporal data.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GlobalAvgPool2D" title="mxnet.gluon.nn.GlobalAvgPool2D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.GlobalAvgPool2D</span></code></a></p></td> |
| <td><p>Global average pooling operation for spatial data.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GlobalAvgPool3D" title="mxnet.gluon.nn.GlobalAvgPool3D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.GlobalAvgPool3D</span></code></a></p></td> |
| <td><p>Global average pooling operation for 3D data (spatial or spatio-temporal).</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.ReflectionPad2D" title="mxnet.gluon.nn.ReflectionPad2D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.ReflectionPad2D</span></code></a></p></td> |
| <td><p>Pads the input tensor using the reflection of the input boundary.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <div class="section" id="normalization-layers"> |
| <h2>Normalization Layers<a class="headerlink" href="#normalization-layers" title="Permalink to this headline">¶</a></h2> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.BatchNorm" title="mxnet.gluon.nn.BatchNorm"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.BatchNorm</span></code></a></p></td> |
| <td><p>Batch normalization layer (Ioffe and Szegedy, 2014).</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.InstanceNorm" title="mxnet.gluon.nn.InstanceNorm"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.InstanceNorm</span></code></a></p></td> |
| <td><p>Applies instance normalization to the n-dimensional input array.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.LayerNorm" title="mxnet.gluon.nn.LayerNorm"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.LayerNorm</span></code></a></p></td> |
| <td><p>Applies layer normalization to the n-dimensional input array.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <div class="section" id="embedding-layers"> |
| <h2>Embedding Layers<a class="headerlink" href="#embedding-layers" title="Permalink to this headline">¶</a></h2> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Embedding" title="mxnet.gluon.nn.Embedding"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.Embedding</span></code></a></p></td> |
| <td><p>Turns non-negative integers (indexes/tokens) into dense vectors of fixed size.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <div class="section" id="advanced-activation-layers"> |
| <h2>Advanced Activation Layers<a class="headerlink" href="#advanced-activation-layers" title="Permalink to this headline">¶</a></h2> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.LeakyReLU" title="mxnet.gluon.nn.LeakyReLU"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.LeakyReLU</span></code></a></p></td> |
| <td><p>Leaky version of a Rectified Linear Unit.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.PReLU" title="mxnet.gluon.nn.PReLU"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.PReLU</span></code></a></p></td> |
| <td><p>Parametric leaky version of a Rectified Linear Unit.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.ELU" title="mxnet.gluon.nn.ELU"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.ELU</span></code></a></p></td> |
| <td><p>Exponential Linear Unit (ELU).</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.SELU" title="mxnet.gluon.nn.SELU"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.SELU</span></code></a></p></td> |
| <td><p>Scaled Exponential Linear Unit (SELU).</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Swish" title="mxnet.gluon.nn.Swish"><code class="xref py py-obj docutils literal notranslate"><span class="pre">nn.Swish</span></code></a></p></td> |
| <td><p>Swish activation function.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <div class="section" id="module-mxnet.gluon.nn"> |
| <span id="api-reference"></span><h2>API Reference<a class="headerlink" href="#module-mxnet.gluon.nn" title="Permalink to this headline">¶</a></h2> |
| <p>Neural network layers.</p> |
| <p><strong>Classes</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Activation" title="mxnet.gluon.nn.Activation"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Activation</span></code></a>(activation, **kwargs)</p></td> |
| <td><p>Applies an activation function to input.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.AvgPool1D" title="mxnet.gluon.nn.AvgPool1D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">AvgPool1D</span></code></a>([pool_size, strides, padding, …])</p></td> |
| <td><p>Average pooling operation for temporal data.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.AvgPool2D" title="mxnet.gluon.nn.AvgPool2D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">AvgPool2D</span></code></a>([pool_size, strides, padding, …])</p></td> |
| <td><p>Average pooling operation for spatial data.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.AvgPool3D" title="mxnet.gluon.nn.AvgPool3D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">AvgPool3D</span></code></a>([pool_size, strides, padding, …])</p></td> |
| <td><p>Average pooling operation for 3D data (spatial or spatio-temporal).</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.BatchNorm" title="mxnet.gluon.nn.BatchNorm"><code class="xref py py-obj docutils literal notranslate"><span class="pre">BatchNorm</span></code></a>([axis, momentum, epsilon, center, …])</p></td> |
| <td><p>Batch normalization layer (Ioffe and Szegedy, 2014).</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.BatchNormReLU" title="mxnet.gluon.nn.BatchNormReLU"><code class="xref py py-obj docutils literal notranslate"><span class="pre">BatchNormReLU</span></code></a>([axis, momentum, epsilon, …])</p></td> |
| <td><p>Batch normalization layer (Ioffe and Szegedy, 2014).</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Block</span></code></a>([prefix, params])</p></td> |
| <td><p>Base class for all neural network layers and models.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Conv1D" title="mxnet.gluon.nn.Conv1D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv1D</span></code></a>(channels, kernel_size[, strides, …])</p></td> |
| <td><p>1D convolution layer (e.g. temporal convolution).</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Conv1DTranspose" title="mxnet.gluon.nn.Conv1DTranspose"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv1DTranspose</span></code></a>(channels, kernel_size[, …])</p></td> |
| <td><p>Transposed 1D convolution layer (sometimes called Deconvolution).</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Conv2D" title="mxnet.gluon.nn.Conv2D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv2D</span></code></a>(channels, kernel_size[, strides, …])</p></td> |
| <td><p>2D convolution layer (e.g. spatial convolution over images).</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Conv2DTranspose" title="mxnet.gluon.nn.Conv2DTranspose"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv2DTranspose</span></code></a>(channels, kernel_size[, …])</p></td> |
| <td><p>Transposed 2D convolution layer (sometimes called Deconvolution).</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Conv3D" title="mxnet.gluon.nn.Conv3D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv3D</span></code></a>(channels, kernel_size[, strides, …])</p></td> |
| <td><p>3D convolution layer (e.g. spatial convolution over volumes).</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Conv3DTranspose" title="mxnet.gluon.nn.Conv3DTranspose"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv3DTranspose</span></code></a>(channels, kernel_size[, …])</p></td> |
| <td><p>Transposed 3D convolution layer (sometimes called Deconvolution).</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Dense" title="mxnet.gluon.nn.Dense"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Dense</span></code></a>(units[, activation, use_bias, …])</p></td> |
| <td><p>Just your regular densely-connected NN layer.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Dropout" title="mxnet.gluon.nn.Dropout"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Dropout</span></code></a>(rate[, axes])</p></td> |
| <td><p>Applies Dropout to the input.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.ELU" title="mxnet.gluon.nn.ELU"><code class="xref py py-obj docutils literal notranslate"><span class="pre">ELU</span></code></a>([alpha])</p></td> |
| <td><p>Exponential Linear Unit (ELU).</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Embedding" title="mxnet.gluon.nn.Embedding"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Embedding</span></code></a>(input_dim, output_dim[, dtype, …])</p></td> |
| <td><p>Turns non-negative integers (indexes/tokens) into dense vectors of fixed size.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Flatten" title="mxnet.gluon.nn.Flatten"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Flatten</span></code></a>(**kwargs)</p></td> |
| <td><p>Flattens the input to two dimensions.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GELU" title="mxnet.gluon.nn.GELU"><code class="xref py py-obj docutils literal notranslate"><span class="pre">GELU</span></code></a>(**kwargs)</p></td> |
| <td><p>Gaussian Error Linear Unit (GELU).</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GlobalAvgPool1D" title="mxnet.gluon.nn.GlobalAvgPool1D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">GlobalAvgPool1D</span></code></a>([layout])</p></td> |
| <td><p>Global average pooling operation for temporal data.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GlobalAvgPool2D" title="mxnet.gluon.nn.GlobalAvgPool2D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">GlobalAvgPool2D</span></code></a>([layout])</p></td> |
| <td><p>Global average pooling operation for spatial data.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GlobalAvgPool3D" title="mxnet.gluon.nn.GlobalAvgPool3D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">GlobalAvgPool3D</span></code></a>([layout])</p></td> |
| <td><p>Global average pooling operation for 3D data (spatial or spatio-temporal).</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GlobalMaxPool1D" title="mxnet.gluon.nn.GlobalMaxPool1D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">GlobalMaxPool1D</span></code></a>([layout])</p></td> |
| <td><p>Global max pooling operation for one dimensional (temporal) data.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GlobalMaxPool2D" title="mxnet.gluon.nn.GlobalMaxPool2D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">GlobalMaxPool2D</span></code></a>([layout])</p></td> |
| <td><p>Global max pooling operation for two dimensional (spatial) data.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GlobalMaxPool3D" title="mxnet.gluon.nn.GlobalMaxPool3D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">GlobalMaxPool3D</span></code></a>([layout])</p></td> |
| <td><p>Global max pooling operation for 3D data (spatial or spatio-temporal).</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GroupNorm" title="mxnet.gluon.nn.GroupNorm"><code class="xref py py-obj docutils literal notranslate"><span class="pre">GroupNorm</span></code></a>([num_groups, epsilon, center, …])</p></td> |
| <td><p>Applies group normalization to the n-dimensional input array.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridBlock" title="mxnet.gluon.nn.HybridBlock"><code class="xref py py-obj docutils literal notranslate"><span class="pre">HybridBlock</span></code></a>([prefix, params])</p></td> |
| <td><p><cite>HybridBlock</cite> supports forwarding with both Symbol and NDArray.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridLambda" title="mxnet.gluon.nn.HybridLambda"><code class="xref py py-obj docutils literal notranslate"><span class="pre">HybridLambda</span></code></a>(function[, prefix])</p></td> |
| <td><p>Wraps an operator or an expression as a HybridBlock object.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridSequential" title="mxnet.gluon.nn.HybridSequential"><code class="xref py py-obj docutils literal notranslate"><span class="pre">HybridSequential</span></code></a>([prefix, params])</p></td> |
| <td><p>Stacks HybridBlocks sequentially.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.InstanceNorm" title="mxnet.gluon.nn.InstanceNorm"><code class="xref py py-obj docutils literal notranslate"><span class="pre">InstanceNorm</span></code></a>([axis, epsilon, center, scale, …])</p></td> |
| <td><p>Applies instance normalization to the n-dimensional input array.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Lambda" title="mxnet.gluon.nn.Lambda"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Lambda</span></code></a>(function[, prefix])</p></td> |
| <td><p>Wraps an operator or an expression as a Block object.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.LayerNorm" title="mxnet.gluon.nn.LayerNorm"><code class="xref py py-obj docutils literal notranslate"><span class="pre">LayerNorm</span></code></a>([axis, epsilon, center, scale, …])</p></td> |
| <td><p>Applies layer normalization to the n-dimensional input array.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.LeakyReLU" title="mxnet.gluon.nn.LeakyReLU"><code class="xref py py-obj docutils literal notranslate"><span class="pre">LeakyReLU</span></code></a>(alpha, **kwargs)</p></td> |
| <td><p>Leaky version of a Rectified Linear Unit.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.MaxPool1D" title="mxnet.gluon.nn.MaxPool1D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">MaxPool1D</span></code></a>([pool_size, strides, padding, …])</p></td> |
| <td><p>Max pooling operation for one dimensional data.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.MaxPool2D" title="mxnet.gluon.nn.MaxPool2D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">MaxPool2D</span></code></a>([pool_size, strides, padding, …])</p></td> |
| <td><p>Max pooling operation for two dimensional (spatial) data.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.MaxPool3D" title="mxnet.gluon.nn.MaxPool3D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">MaxPool3D</span></code></a>([pool_size, strides, padding, …])</p></td> |
| <td><p>Max pooling operation for 3D data (spatial or spatio-temporal).</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.PReLU" title="mxnet.gluon.nn.PReLU"><code class="xref py py-obj docutils literal notranslate"><span class="pre">PReLU</span></code></a>([alpha_initializer, in_channels])</p></td> |
| <td><p>Parametric leaky version of a Rectified Linear Unit.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.ReflectionPad2D" title="mxnet.gluon.nn.ReflectionPad2D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">ReflectionPad2D</span></code></a>([padding])</p></td> |
| <td><p>Pads the input tensor using the reflection of the input boundary.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.SELU" title="mxnet.gluon.nn.SELU"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SELU</span></code></a>(**kwargs)</p></td> |
| <td><p>Scaled Exponential Linear Unit (SELU).</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Sequential" title="mxnet.gluon.nn.Sequential"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Sequential</span></code></a>([prefix, params])</p></td> |
| <td><p>Stacks Blocks sequentially.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Swish" title="mxnet.gluon.nn.Swish"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Swish</span></code></a>([beta])</p></td> |
| <td><p>Swish activation function.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.SymbolBlock" title="mxnet.gluon.nn.SymbolBlock"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SymbolBlock</span></code></a>(outputs, inputs[, params])</p></td> |
| <td><p>Construct block from symbol.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Activation"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Activation</code><span class="sig-paren">(</span><em class="sig-param">activation</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/activations.html#Activation"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Activation" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Applies an activation function to input.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>activation</strong> (<em>str</em>) – Name of activation function to use. |
| See <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a> for available choices.</p> |
| </dd> |
| </dl> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Activation.hybrid_forward" title="mxnet.gluon.nn.Activation.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x)</p></td> |
| <td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with arbitrary shape.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: output tensor with the same shape as <cite>data</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
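The string passed as <cite>activation</cite> names a simple elementwise function ('relu', 'sigmoid', 'tanh', 'softrelu', and 'softsign' are among the supported choices). A pure-Python sketch of the underlying formulas, for reference only (the actual MXNet kernels operate elementwise on tensors):

```python
import math

# Reference formulas for some activation names nn.Activation accepts.
# Illustrative scalar versions, not the MXNet implementations.
def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softrelu(x):
    # softplus: a smooth approximation to relu
    return math.log(1.0 + math.exp(x))

def softsign(x):
    return x / (1.0 + abs(x))

print(relu(-2.0), sigmoid(0.0), softsign(1.0))
```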
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Activation.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/activations.html#Activation.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Activation.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.AvgPool1D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">AvgPool1D</code><span class="sig-paren">(</span><em class="sig-param">pool_size=2</em>, <em class="sig-param">strides=None</em>, <em class="sig-param">padding=0</em>, <em class="sig-param">layout='NCW'</em>, <em class="sig-param">ceil_mode=False</em>, <em class="sig-param">count_include_pad=True</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#AvgPool1D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.AvgPool1D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Pooling</span></code></p> |
| <p>Average pooling operation for temporal data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>pool_size</strong> (<em>int</em>) – Size of the average pooling windows.</p></li> |
| <li><p><strong>strides</strong> (<em>int</em><em>, or </em><em>None</em>) – Factor by which to downscale. E.g. 2 will halve the input size. |
| If <cite>None</cite>, it will default to <cite>pool_size</cite>.</p></li> |
| <li><p><strong>padding</strong> (<em>int</em>) – If padding is non-zero, the input is implicitly |
| zero-padded on both sides by <cite>padding</cite> points.</p></li> |
| <li><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCW'</em>) – Dimension ordering of data and out (‘NCW’ or ‘NWC’). |
| ‘N’, ‘C’, ‘W’ stand for batch, channel, and width (time) dimensions, |
| respectively. Padding is applied on the ‘W’ dimension.</p></li> |
| <li><p><strong>ceil_mode</strong> (<em>bool</em><em>, </em><em>default False</em>) – When <cite>True</cite>, will use ceil instead of floor to compute the output shape.</p></li> |
| <li><p><strong>count_include_pad</strong> (<em>bool</em><em>, </em><em>default True</em>) – When <cite>False</cite>, padding elements are excluded when computing the average value.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 3D input tensor with shape <cite>(batch_size, in_channels, width)</cite> |
| when <cite>layout</cite> is <cite>NCW</cite>. For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul> |
| <li><p><strong>out</strong>: 3D output tensor with shape <cite>(batch_size, channels, out_width)</cite> |
| when <cite>layout</cite> is <cite>NCW</cite>. out_width is calculated as:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">out_width</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">width</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="o">-</span><span class="n">pool_size</span><span class="p">)</span><span class="o">/</span><span class="n">strides</span><span class="p">)</span><span class="o">+</span><span class="mi">1</span> |
| </pre></div> |
| </div> |
| <p>When <cite>ceil_mode</cite> is <cite>True</cite>, ceil will be used instead of floor in this |
| equation.</p> |
| </li> |
| </ul> |
| </dd> |
| </dl> |
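The <cite>out_width</cite> arithmetic above can be checked directly. A small helper (hypothetical, not part of Gluon) that mirrors the formula, including the rule that <cite>strides</cite> defaults to <cite>pool_size</cite>:

```python
import math

def avgpool1d_out_width(width, pool_size=2, strides=None,
                        padding=0, ceil_mode=False):
    # Mirrors AvgPool1D's out_width formula; strides defaults to
    # pool_size when None, as the layer does.
    if strides is None:
        strides = pool_size
    rnd = math.ceil if ceil_mode else math.floor
    return rnd((width + 2 * padding - pool_size) / strides) + 1

print(avgpool1d_out_width(10))                  # -> 5
print(avgpool1d_out_width(11))                  # -> 5 (floor)
print(avgpool1d_out_width(11, ceil_mode=True))  # -> 6 (ceil)
```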
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.AvgPool2D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">AvgPool2D</code><span class="sig-paren">(</span><em class="sig-param">pool_size=(2</em>, <em class="sig-param">2)</em>, <em class="sig-param">strides=None</em>, <em class="sig-param">padding=0</em>, <em class="sig-param">ceil_mode=False</em>, <em class="sig-param">layout='NCHW'</em>, <em class="sig-param">count_include_pad=True</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#AvgPool2D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.AvgPool2D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Pooling</span></code></p> |
| <p>Average pooling operation for spatial data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>pool_size</strong> (<em>int</em><em> or </em><em>list/tuple of 2 ints</em>) – Size of the average pooling windows.</p></li> |
| <li><p><strong>strides</strong> (<em>int</em><em>, </em><em>list/tuple of 2 ints</em><em>, or </em><em>None</em>) – Factor by which to downscale. E.g. 2 will halve the input size. |
| If <cite>None</cite>, it will default to <cite>pool_size</cite>.</p></li> |
| <li><p><strong>padding</strong> (<em>int</em><em> or </em><em>list/tuple of 2 ints</em>) – If padding is non-zero, the input is implicitly |
| zero-padded on both sides by <cite>padding</cite> points.</p></li> |
| <li><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCHW'</em>) – Dimension ordering of data and out (‘NCHW’ or ‘NHWC’). |
| ‘N’, ‘C’, ‘H’, ‘W’ stand for batch, channel, height, and width |
| dimensions, respectively. Padding is applied on the ‘H’ and ‘W’ dimensions.</p></li> |
| <li><p><strong>ceil_mode</strong> (<em>bool</em><em>, </em><em>default False</em>) – When <cite>True</cite>, will use ceil instead of floor to compute the output shape.</p></li> |
| <li><p><strong>count_include_pad</strong> (<em>bool</em><em>, </em><em>default True</em>) – When <cite>False</cite>, padding elements are excluded when computing the average value.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 4D input tensor with shape |
| <cite>(batch_size, in_channels, height, width)</cite> when <cite>layout</cite> is <cite>NCHW</cite>. |
| For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul> |
| <li><p><strong>out</strong>: 4D output tensor with shape |
| <cite>(batch_size, channels, out_height, out_width)</cite> when <cite>layout</cite> is <cite>NCHW</cite>. |
| out_height and out_width are calculated as:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">out_height</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">height</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="n">pool_size</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span><span class="o">/</span><span class="n">strides</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| <span class="n">out_width</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">width</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">-</span><span class="n">pool_size</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span><span class="o">/</span><span class="n">strides</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| </pre></div> |
| </div> |
| <p>When <cite>ceil_mode</cite> is <cite>True</cite>, ceil will be used instead of floor in this |
| equation.</p> |
| </li> |
| </ul> |
| </dd> |
| </dl> |
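<cite>count_include_pad</cite> only makes a difference when <cite>padding</cite> is non-zero. A tiny pure-Python reference pool (illustrative only, not the MXNet kernel) makes the two behaviours concrete:

```python
def avg_pool2d(mat, pool=2, stride=2, pad=1, count_include_pad=True):
    # Zero-pad the input, then average each pool x pool window.
    # count_include_pad controls whether the zero padding counts
    # toward the divisor of each window's average.
    h, w = len(mat), len(mat[0])
    padded = [[0.0] * (w + 2 * pad) for _ in range(h + 2 * pad)]
    for i in range(h):
        for j in range(w):
            padded[i + pad][j + pad] = float(mat[i][j])
    out_h = (h + 2 * pad - pool) // stride + 1
    out_w = (w + 2 * pad - pool) // stride + 1
    out = []
    for oi in range(out_h):
        row = []
        for oj in range(out_w):
            total, real = 0.0, 0
            for di in range(pool):
                for dj in range(pool):
                    ii, jj = oi * stride + di, oj * stride + dj
                    total += padded[ii][jj]
                    if pad <= ii < h + pad and pad <= jj < w + pad:
                        real += 1  # element came from the real input
            denom = pool * pool if count_include_pad else real
            row.append(total / denom)
        out.append(row)
    return out

print(avg_pool2d([[1, 2], [3, 4]], count_include_pad=True))
print(avg_pool2d([[1, 2], [3, 4]], count_include_pad=False))
```

With padding included, every corner window divides by 4 even though three of its elements are padding zeros; excluding padding divides only by the count of real elements.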
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.AvgPool3D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">AvgPool3D</code><span class="sig-paren">(</span><em class="sig-param">pool_size=(2</em>, <em class="sig-param">2</em>, <em class="sig-param">2)</em>, <em class="sig-param">strides=None</em>, <em class="sig-param">padding=0</em>, <em class="sig-param">ceil_mode=False</em>, <em class="sig-param">layout='NCDHW'</em>, <em class="sig-param">count_include_pad=True</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#AvgPool3D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.AvgPool3D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Pooling</span></code></p> |
| <p>Average pooling operation for 3D data (spatial or spatio-temporal).</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>pool_size</strong> (<em>int</em><em> or </em><em>list/tuple of 3 ints</em>) – Size of the average pooling windows.</p></li> |
| <li><p><strong>strides</strong> (<em>int</em><em>, </em><em>list/tuple of 3 ints</em><em>, or </em><em>None</em>) – Factor by which to downscale. E.g. 2 will halve the input size. |
| If <cite>None</cite>, it will default to <cite>pool_size</cite>.</p></li> |
| <li><p><strong>padding</strong> (<em>int</em><em> or </em><em>list/tuple of 3 ints</em>) – If padding is non-zero, the input is implicitly |
| zero-padded on both sides by <cite>padding</cite> points.</p></li> |
| <li><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCDHW'</em>) – Dimension ordering of data and out (‘NCDHW’ or ‘NDHWC’). |
| ‘N’, ‘C’, ‘D’, ‘H’, ‘W’ stand for batch, channel, depth, height, and width |
| dimensions, respectively. Padding is applied on the ‘D’, ‘H’ and ‘W’ |
| dimensions.</p></li> |
| <li><p><strong>ceil_mode</strong> (<em>bool</em><em>, </em><em>default False</em>) – When <cite>True</cite>, will use ceil instead of floor to compute the output shape.</p></li> |
| <li><p><strong>count_include_pad</strong> (<em>bool</em><em>, </em><em>default True</em>) – When <cite>False</cite>, padding elements are excluded when computing the average value.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 5D input tensor with shape |
| <cite>(batch_size, in_channels, depth, height, width)</cite> when <cite>layout</cite> is <cite>NCDHW</cite>. |
| For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul> |
| <li><p><strong>out</strong>: 5D output tensor with shape |
| <cite>(batch_size, channels, out_depth, out_height, out_width)</cite> when <cite>layout</cite> is <cite>NCDHW</cite>. |
| out_depth, out_height and out_width are calculated as:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">out_depth</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">depth</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="n">pool_size</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span><span class="o">/</span><span class="n">strides</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| <span class="n">out_height</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">height</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">-</span><span class="n">pool_size</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span><span class="o">/</span><span class="n">strides</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| <span class="n">out_width</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">width</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span><span class="o">-</span><span class="n">pool_size</span><span class="p">[</span><span class="mi">2</span><span class="p">])</span><span class="o">/</span><span class="n">strides</span><span class="p">[</span><span class="mi">2</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| </pre></div> |
| </div> |
| <p>When <cite>ceil_mode</cite> is <cite>True</cite>, ceil will be used instead of floor in this |
| equation.</p> |
| </li> |
| </ul> |
| </dd> |
| </dl> |
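The three per-dimension formulas are identical in structure, so they generalize to any number of pooled dimensions. A hypothetical helper (not part of Gluon) that computes the full output shape:

```python
import math

def pool_out_shape(in_shape, pool_size, strides=None,
                   padding=None, ceil_mode=False):
    # Applies the out_depth/out_height/out_width formula above to each
    # pooled dimension of in_shape (e.g. (depth, height, width)).
    n = len(in_shape)
    if strides is None:
        strides = pool_size
    if padding is None:
        padding = (0,) * n
    rnd = math.ceil if ceil_mode else math.floor
    return tuple(
        rnd((in_shape[i] + 2 * padding[i] - pool_size[i]) / strides[i]) + 1
        for i in range(n)
    )

print(pool_out_shape((8, 16, 16), (2, 2, 2)))  # -> (4, 8, 8)
```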
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.BatchNorm"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">BatchNorm</code><span class="sig-paren">(</span><em class="sig-param">axis=1</em>, <em class="sig-param">momentum=0.9</em>, <em class="sig-param">epsilon=1e-05</em>, <em class="sig-param">center=True</em>, <em class="sig-param">scale=True</em>, <em class="sig-param">use_global_stats=False</em>, <em class="sig-param">beta_initializer='zeros'</em>, <em class="sig-param">gamma_initializer='ones'</em>, <em class="sig-param">running_mean_initializer='zeros'</em>, <em class="sig-param">running_variance_initializer='ones'</em>, <em class="sig-param">in_channels=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#BatchNorm"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.BatchNorm" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.basic_layers._BatchNorm</span></code></p> |
| <p>Batch normalization layer (Ioffe and Szegedy, 2014). |
| Normalizes the input at each batch, i.e. applies a transformation |
| that maintains the mean activation close to 0 and the activation |
| standard deviation close to 1.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>axis</strong> (<em>int</em><em>, </em><em>default 1</em>) – The axis that should be normalized. This is typically the channels |
| (C) axis. For instance, after a <cite>Conv2D</cite> layer with <cite>layout=’NCHW’</cite>, |
| set <cite>axis=1</cite> in <cite>BatchNorm</cite>. If <cite>layout=’NHWC’</cite>, then set <cite>axis=3</cite>.</p></li> |
| <li><p><strong>momentum</strong> (<em>float</em><em>, </em><em>default 0.9</em>) – Momentum for the moving average.</p></li> |
| <li><p><strong>epsilon</strong> (<em>float</em><em>, </em><em>default 1e-5</em>) – Small float added to variance to avoid dividing by zero.</p></li> |
| <li><p><strong>center</strong> (<em>bool</em><em>, </em><em>default True</em>) – If True, add offset of <cite>beta</cite> to normalized tensor. |
| If False, <cite>beta</cite> is ignored.</p></li> |
| <li><p><strong>scale</strong> (<em>bool</em><em>, </em><em>default True</em>) – If True, multiply by <cite>gamma</cite>. If False, <cite>gamma</cite> is not used. |
| When the next layer is linear (or, e.g., <cite>nn.relu</cite>), |
| this can be disabled, since the scaling |
| will be done by the next layer.</p></li> |
| <li><p><strong>use_global_stats</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, use global moving statistics instead of local batch statistics; this |
| effectively turns batch-norm into a scale-shift operator. |
| If False, use local batch statistics.</p></li> |
| <li><p><strong>beta_initializer</strong> (str or <cite>Initializer</cite>, default ‘zeros’) – Initializer for the beta weight.</p></li> |
| <li><p><strong>gamma_initializer</strong> (str or <cite>Initializer</cite>, default ‘ones’) – Initializer for the gamma weight.</p></li> |
| <li><p><strong>running_mean_initializer</strong> (str or <cite>Initializer</cite>, default ‘zeros’) – Initializer for the running mean.</p></li> |
| <li><p><strong>running_variance_initializer</strong> (str or <cite>Initializer</cite>, default ‘ones’) – Initializer for the running variance.</p></li> |
| <li><p><strong>in_channels</strong> (<em>int</em><em>, </em><em>default 0</em>) – Number of channels (feature maps) in input data. If not specified, |
| initialization will be deferred to the first time <cite>forward</cite> is called |
| and <cite>in_channels</cite> will be inferred from the shape of input data.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with arbitrary shape.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: output tensor with the same shape as <cite>data</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
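At training time the transformation applied along <cite>axis</cite> amounts to normalizing by the batch mean and variance, then scaling by <cite>gamma</cite> and shifting by <cite>beta</cite>. A scalar-batch sketch of that math (reference only, not the fused MXNet operator, which also maintains the running statistics):

```python
import math

def batch_norm(xs, gamma=1.0, beta=0.0, epsilon=1e-5):
    # Normalize a batch of scalars to ~zero mean / unit variance,
    # then apply the learned scale (gamma) and shift (beta).
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [gamma * (x - mean) / math.sqrt(var + epsilon) + beta
            for x in xs]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
print([round(v, 3) for v in out])  # ~[-1.342, -0.447, 0.447, 1.342]
```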
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.BatchNormReLU"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">BatchNormReLU</code><span class="sig-paren">(</span><em class="sig-param">axis=1</em>, <em class="sig-param">momentum=0.9</em>, <em class="sig-param">epsilon=1e-05</em>, <em class="sig-param">center=True</em>, <em class="sig-param">scale=True</em>, <em class="sig-param">use_global_stats=False</em>, <em class="sig-param">beta_initializer='zeros'</em>, <em class="sig-param">gamma_initializer='ones'</em>, <em class="sig-param">running_mean_initializer='zeros'</em>, <em class="sig-param">running_variance_initializer='ones'</em>, <em class="sig-param">in_channels=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#BatchNormReLU"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.BatchNormReLU" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.basic_layers._BatchNorm</span></code></p> |
| <p>Batch normalization layer (Ioffe and Szegedy, 2014) with a fused ReLU |
| applied to the normalized output. |
| Normalizes the input at each batch, i.e. applies a transformation |
| that maintains the mean activation close to 0 and the activation |
| standard deviation close to 1.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>axis</strong> (<em>int</em><em>, </em><em>default 1</em>) – The axis that should be normalized. This is typically the channels |
| (C) axis. For instance, after a <cite>Conv2D</cite> layer with <cite>layout=’NCHW’</cite>, |
| set <cite>axis=1</cite> in <cite>BatchNorm</cite>. If <cite>layout=’NHWC’</cite>, then set <cite>axis=3</cite>.</p></li> |
| <li><p><strong>momentum</strong> (<em>float</em><em>, </em><em>default 0.9</em>) – Momentum for the moving average.</p></li> |
| <li><p><strong>epsilon</strong> (<em>float</em><em>, </em><em>default 1e-5</em>) – Small float added to variance to avoid dividing by zero.</p></li> |
| <li><p><strong>center</strong> (<em>bool</em><em>, </em><em>default True</em>) – If True, add offset of <cite>beta</cite> to normalized tensor. |
| If False, <cite>beta</cite> is ignored.</p></li> |
| <li><p><strong>scale</strong> (<em>bool</em><em>, </em><em>default True</em>) – If True, multiply by <cite>gamma</cite>. If False, <cite>gamma</cite> is not used. |
| When the next layer is linear (or, e.g., <cite>nn.relu</cite>), |
| this can be disabled, since the scaling |
| will be done by the next layer.</p></li> |
| <li><p><strong>use_global_stats</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, use global moving statistics instead of local batch statistics; this |
| effectively turns batch-norm into a scale-shift operator. |
| If False, use local batch statistics.</p></li> |
| <li><p><strong>beta_initializer</strong> (str or <cite>Initializer</cite>, default ‘zeros’) – Initializer for the beta weight.</p></li> |
| <li><p><strong>gamma_initializer</strong> (str or <cite>Initializer</cite>, default ‘ones’) – Initializer for the gamma weight.</p></li> |
| <li><p><strong>running_mean_initializer</strong> (str or <cite>Initializer</cite>, default ‘zeros’) – Initializer for the running mean.</p></li> |
| <li><p><strong>running_variance_initializer</strong> (str or <cite>Initializer</cite>, default ‘ones’) – Initializer for the running variance.</p></li> |
| <li><p><strong>in_channels</strong> (<em>int</em><em>, </em><em>default 0</em>) – Number of channels (feature maps) in input data. If not specified, |
| initialization will be deferred to the first time <cite>forward</cite> is called |
| and <cite>in_channels</cite> will be inferred from the shape of input data.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with arbitrary shape.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: output tensor with the same shape as <cite>data</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Block"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Block</code><span class="sig-paren">(</span><em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">object</span></code></p> |
| <p>Base class for all neural network layers and models. Your models should |
| subclass this class.</p> |
| <p><a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a> can be nested recursively in a tree structure. You can create and |
| assign child <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a> as regular attributes:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">mxnet.gluon</span> <span class="kn">import</span> <span class="n">Block</span><span class="p">,</span> <span class="n">nn</span> |
| <span class="kn">from</span> <span class="nn">mxnet</span> <span class="kn">import</span> <span class="n">ndarray</span> <span class="k">as</span> <span class="n">F</span> |
| <span class="kn">import</span> <span class="nn">mxnet</span> <span class="k">as</span> <span class="n">mx</span> |
| |
| <span class="k">class</span> <span class="nc">Model</span><span class="p">(</span><span class="n">Block</span><span class="p">):</span> |
| <span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span> |
| <span class="nb">super</span><span class="p">(</span><span class="n">Model</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span><span class="o">**</span><span class="n">kwargs</span><span class="p">)</span> |
| <span class="c1"># use name_scope to give child Blocks appropriate names.</span> |
| <span class="k">with</span> <span class="bp">self</span><span class="o">.</span><span class="n">name_scope</span><span class="p">():</span> |
| <span class="bp">self</span><span class="o">.</span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span> |
| <span class="bp">self</span><span class="o">.</span><span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span> |
| |
| <span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span> |
| <span class="n">x</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">relu</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">dense0</span><span class="p">(</span><span class="n">x</span><span class="p">))</span> |
| <span class="k">return</span> <span class="n">F</span><span class="o">.</span><span class="n">relu</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">dense1</span><span class="p">(</span><span class="n">x</span><span class="p">))</span> |
| |
| <span class="n">model</span> <span class="o">=</span> <span class="n">Model</span><span class="p">()</span> |
| <span class="n">model</span><span class="o">.</span><span class="n">initialize</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(</span><span class="mi">0</span><span class="p">))</span> |
| <span class="n">model</span><span class="p">(</span><span class="n">F</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="mi">10</span><span class="p">,</span> <span class="mi">10</span><span class="p">),</span> <span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(</span><span class="mi">0</span><span class="p">)))</span> |
| </pre></div> |
| </div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.apply" title="mxnet.gluon.nn.Block.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td> |
| <td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.cast" title="mxnet.gluon.nn.Block.cast"><code class="xref py py-obj docutils literal notranslate"><span class="pre">cast</span></code></a>(dtype)</p></td> |
| <td><p>Cast this Block to use another data type.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.collect_params" title="mxnet.gluon.nn.Block.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td> |
| <td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">ParameterDict</span></code> containing this <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a>’s and all of its children’s Parameters (the default), or only the Parameters whose names match the given regular expressions.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.forward" title="mxnet.gluon.nn.Block.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(*args)</p></td> |
| <td><p>Overrides to implement forward computation using <code class="xref py py-class docutils literal notranslate"><span class="pre">NDArray</span></code>.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.hybridize" title="mxnet.gluon.nn.Block.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active])</p></td> |
| <td><p>Refer to the description of HybridBlock’s hybridize().</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.initialize" title="mxnet.gluon.nn.Block.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, ctx, verbose, force_reinit])</p></td> |
| <td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a> and its children.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.load_parameters" title="mxnet.gluon.nn.Block.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, ctx, …])</p></td> |
| <td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.load_params" title="mxnet.gluon.nn.Block.load_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_params</span></code></a>(filename[, ctx, allow_missing, …])</p></td> |
| <td><p>[Deprecated] Please use load_parameters.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.name_scope" title="mxnet.gluon.nn.Block.name_scope"><code class="xref py py-obj docutils literal notranslate"><span class="pre">name_scope</span></code></a>()</p></td> |
| <td><p>Returns a name space object managing a child <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a> and parameter names.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.register_child" title="mxnet.gluon.nn.Block.register_child"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_child</span></code></a>(block[, name])</p></td> |
| <td><p>Registers block as a child of self.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.register_forward_hook" title="mxnet.gluon.nn.Block.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td> |
| <td><p>Registers a forward hook on the block.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.register_forward_pre_hook" title="mxnet.gluon.nn.Block.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td> |
| <td><p>Registers a forward pre-hook on the block.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.register_op_hook" title="mxnet.gluon.nn.Block.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td> |
| <td><p>Installs a callback monitor.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.save_parameters" title="mxnet.gluon.nn.Block.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td> |
| <td><p>Save parameters to file.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.save_params" title="mxnet.gluon.nn.Block.save_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_params</span></code></a>(filename)</p></td> |
| <td><p>[Deprecated] Please use save_parameters.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.summary" title="mxnet.gluon.nn.Block.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td> |
| <td><p>Print the summary of the model’s output and parameters.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <p><strong>Attributes</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.name" title="mxnet.gluon.nn.Block.name"><code class="xref py py-obj docutils literal notranslate"><span class="pre">name</span></code></a></p></td> |
| <td><p>Name of this <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a>, without the trailing ‘_’.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.params" title="mxnet.gluon.nn.Block.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td> |
| <td><p>Returns this <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a>’s parameter dictionary (does not include its children’s parameters).</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Block.prefix" title="mxnet.gluon.nn.Block.prefix"><code class="xref py py-obj docutils literal notranslate"><span class="pre">prefix</span></code></a></p></td> |
| <td><p>Prefix of this <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <p>Child <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a> assigned this way will be registered and <a class="reference internal" href="#mxnet.gluon.nn.Block.collect_params" title="mxnet.gluon.nn.Block.collect_params"><code class="xref py py-meth docutils literal notranslate"><span class="pre">collect_params()</span></code></a> |
| will collect their Parameters recursively. You can also manually register |
| child blocks with <a class="reference internal" href="#mxnet.gluon.nn.Block.register_child" title="mxnet.gluon.nn.Block.register_child"><code class="xref py py-meth docutils literal notranslate"><span class="pre">register_child()</span></code></a>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>prefix</strong> (<em>str</em>) – Prefix acts like a name space. All child blocks created within the parent block’s |
| <a class="reference internal" href="#mxnet.gluon.nn.Block.name_scope" title="mxnet.gluon.nn.Block.name_scope"><code class="xref py py-meth docutils literal notranslate"><span class="pre">name_scope()</span></code></a> will have the parent block’s prefix in their names. |
| Please refer to |
| <a class="reference external" href="/api/python/docs/tutorials/packages/gluon/blocks/naming.html">naming tutorial</a> |
| for more info on prefix and naming.</p></li> |
| <li><p><strong>params</strong> (<a class="reference internal" href="../parameter_dict.html#mxnet.gluon.ParameterDict" title="mxnet.gluon.ParameterDict"><em>ParameterDict</em></a><em> or </em><em>None</em>) – <p><code class="xref py py-class docutils literal notranslate"><span class="pre">ParameterDict</span></code> for sharing weights with the new <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a>. For example, |
| if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span> |
| <span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">,</span> <span class="n">params</span><span class="o">=</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span> |
| </pre></div> |
| </div> |
| </p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.apply"> |
| <code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.apply"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.apply" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p> |
| </dd> |
| <dt class="field-even">Returns</dt> |
| <dd class="field-even"><p></p> |
| </dd> |
| <dt class="field-odd">Return type</dt> |
| <dd class="field-odd"><p>this block</p> |
| </dd> |
| </dl> |
| </dd></dl> |
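<p>The traversal that <code>apply</code> performs can be sketched in plain Python. The <code>MiniBlock</code> class below is a hypothetical stand-in for <code>Block</code>, not MXNet code; it illustrates the documented behavior of visiting every child block as well as self, and returning the block:</p>

```python
# Hypothetical MiniBlock: a plain-Python stand-in for Block, sketching
# apply(fn)'s traversal order (children first, then self).
class MiniBlock:
    def __init__(self, name):
        self.name = name
        self._children = {}

    def apply(self, fn):
        # Recurse into every child, then apply fn to self.
        for child in self._children.values():
            child.apply(fn)
        fn(self)
        return self  # returning the block itself allows chaining

net = MiniBlock("net")
net._children["dense0"] = MiniBlock("dense0")
net._children["dense1"] = MiniBlock("dense1")

visited = []
net.apply(lambda block: visited.append(block.name))
# visited is now ["dense0", "dense1", "net"]
```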
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.cast"> |
| <code class="sig-name descname">cast</code><span class="sig-paren">(</span><em class="sig-param">dtype</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.cast"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.cast" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Cast this Block to use another data type.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>dtype</strong> (<em>str</em><em> or </em><em>numpy.dtype</em>) – The new data type.</p> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.collect_params"> |
| <code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.collect_params"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.collect_params" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">ParameterDict</span></code> containing this <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a>’s and all of its |
| children’s Parameters (the default), or a <code class="xref py py-class docutils literal notranslate"><span class="pre">ParameterDict</span></code> containing only |
| the parameters whose names match the given regular expressions.</p> |
| <p>For example, collect the specified parameters in [‘conv1_weight’, ‘conv1_bias’, ‘fc_weight’, |
| ‘fc_bias’]:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">'conv1_weight|conv1_bias|fc_weight|fc_bias'</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <p>Or, using regular expressions, collect all parameters whose names end with |
| ‘weight’ or ‘bias’:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">'.*weight|.*bias'</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – Regular expression used to select parameters by name.</p> |
| </dd> |
| <dt class="field-even">Returns</dt> |
| <dd class="field-even"><p></p> |
| </dd> |
| <dt class="field-odd">Return type</dt> |
| <dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">ParameterDict</span></code></p> |
| </dd> |
| </dl> |
| </dd></dl> |
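<p>A minimal sketch of the <code>select</code> filtering, assuming the pattern is matched from the start of each parameter name (<code>re.match</code>-style anchoring, which is consistent with the <code>'.*weight|.*bias'</code> form used above). <code>select_params</code> is a hypothetical helper, not the MXNet implementation:</p>

```python
import re

def select_params(names, select):
    # Sketch of collect_params' filtering: compile the select expression
    # once and keep the parameter names it matches from the start.
    pattern = re.compile(select)
    return [n for n in names if pattern.match(n)]

names = ["conv1_weight", "conv1_bias", "fc_weight", "fc_bias", "running_mean"]

# Explicit alternation selects exactly the listed parameters:
select_params(names, "conv1_weight|conv1_bias|fc_weight|fc_bias")
# -> ["conv1_weight", "conv1_bias", "fc_weight", "fc_bias"]

# Anchored matching is why the suffix form needs a leading '.*':
select_params(names, ".*weight|.*bias")
# -> ["conv1_weight", "conv1_bias", "fc_weight", "fc_bias"]
```

<p>Note that <code>"running_mean"</code> is excluded by both patterns, since neither alternative matches it from the start of the name.</p>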
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.forward"> |
| <code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Override this method to implement forward computation using <code class="xref py py-class docutils literal notranslate"><span class="pre">NDArray</span></code>. Only |
| positional arguments are accepted.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>*args</strong> (<em>list of NDArray</em>) – Input tensors.</p> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.hybridize"> |
| <code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.hybridize"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.hybridize" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Refer to the description of HybridBlock’s hybridize().</p> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.initialize"> |
| <code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=<mxnet.initializer.Uniform object></em>, <em class="sig-param">ctx=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.initialize"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.initialize" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a> and its children. |
| Equivalent to <code class="docutils literal notranslate"><span class="pre">block.collect_params().initialize(...)</span></code></p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>. |
| Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li> |
| <li><p><strong>ctx</strong> (<a class="reference internal" href="../../mxnet/context/index.html#mxnet.context.Context" title="mxnet.context.Context"><em>Context</em></a><em> or </em><em>list of Context</em>) – Keeps a copy of Parameters on one or many context(s).</p></li> |
| <li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li> |
| <li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
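<p>The precedence rule between the global <code>init</code> argument and a <code>Parameter</code>'s own initializer can be sketched as follows. <code>effective_init</code> is a hypothetical helper illustrating the documented behavior, not MXNet code:</p>

```python
# Sketch of the documented precedence: the init passed to initialize()
# is only a global default; a Parameter's own init, when set, wins.
def effective_init(global_init, param_init):
    return param_init if param_init is not None else global_init

effective_init("uniform", None)      # -> "uniform" (global default used)
effective_init("uniform", "xavier")  # -> "xavier" (Parameter.init takes precedence)
```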
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.load_parameters"> |
| <code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">ctx=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.load_parameters"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.load_parameters" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li> |
| <li><p><strong>ctx</strong> (<a class="reference internal" href="../../mxnet/context/index.html#mxnet.context.Context" title="mxnet.context.Context"><em>Context</em></a><em> or </em><em>list of Context</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Context(s) to initialize loaded parameters on.</p></li> |
| <li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li> |
| <li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not |
| present in this Block.</p></li> |
| <li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype |
| provided by the Parameter if any.</p></li> |
| <li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be one of {‘current’, ‘saved’}. |
| Only valid if cast_dtype=True; specifies the source of the dtype for casting |
| the parameters.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p class="rubric">References</p> |
| <p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p> |
| </dd></dl> |
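<p>The <code>allow_missing</code> / <code>ignore_extra</code> semantics can be sketched with plain dicts standing in for a Block's parameters and a parameter file; <code>load_into</code> is a hypothetical helper, not the MXNet implementation:</p>

```python
# Sketch of load_parameters' strictness flags: by default a mismatch in
# either direction is an error; each flag relaxes one direction.
def load_into(block_params, saved, allow_missing=False, ignore_extra=False):
    missing = [k for k in block_params if k not in saved]
    extra = [k for k in saved if k not in block_params]
    if missing and not allow_missing:
        raise ValueError("parameters missing from file: %s" % missing)
    if extra and not ignore_extra:
        raise ValueError("unexpected parameters in file: %s" % extra)
    for k in block_params:
        if k in saved:
            block_params[k] = saved[k]
    return block_params

params = {"dense0_weight": 0.0, "dense0_bias": 0.0}
saved = {"dense0_weight": 1.5, "unknown_param": 9.0}

# Strict (default) mode raises; with both flags set, only matching
# parameters are loaded and the mismatches are silently tolerated.
loaded = load_into(dict(params), saved, allow_missing=True, ignore_extra=True)
# loaded == {"dense0_weight": 1.5, "dense0_bias": 0.0}
```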
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.load_params"> |
| <code class="sig-name descname">load_params</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">ctx=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.load_params"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.load_params" title="Permalink to this definition">¶</a></dt> |
| <dd><p>[Deprecated] Please use load_parameters.</p> |
| <p>Load parameters from file.</p> |
| <dl class="simple"> |
| <dt>filename<span class="classifier">str</span></dt><dd><p>Path to parameter file.</p> |
| </dd> |
| <dt>ctx<span class="classifier">Context or list of Context, default cpu()</span></dt><dd><p>Context(s) to initialize loaded parameters on.</p> |
| </dd> |
| <dt>allow_missing<span class="classifier">bool, default False</span></dt><dd><p>Whether to silently skip loading parameters not represented in the file.</p> |
| </dd> |
| <dt>ignore_extra<span class="classifier">bool, default False</span></dt><dd><p>Whether to silently ignore parameters from the file that are not |
| present in this Block.</p> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.name"> |
| <em class="property">property </em><code class="sig-name descname">name</code><a class="headerlink" href="#mxnet.gluon.nn.Block.name" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Name of this <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a>, without the trailing ‘_’.</p> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.name_scope"> |
| <code class="sig-name descname">name_scope</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.name_scope"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.name_scope" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Returns a name space object managing a child <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a> and parameter |
| names. Should be used within a <code class="docutils literal notranslate"><span class="pre">with</span></code> statement:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">with</span> <span class="bp">self</span><span class="o">.</span><span class="n">name_scope</span><span class="p">():</span> |
| <span class="bp">self</span><span class="o">.</span><span class="n">dense</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <p>Please refer to |
| <a class="reference external" href="/api/python/docs/tutorials/packages/gluon/blocks/naming.html">the naming tutorial</a> |
| for more info on prefix and naming.</p> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.params"> |
| <em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.nn.Block.params" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Returns this <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a>’s parameter dictionary (does not include its |
| children’s parameters).</p> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.prefix"> |
| <em class="property">property </em><code class="sig-name descname">prefix</code><a class="headerlink" href="#mxnet.gluon.nn.Block.prefix" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Prefix of this <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a>.</p> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.register_child"> |
| <code class="sig-name descname">register_child</code><span class="sig-paren">(</span><em class="sig-param">block</em>, <em class="sig-param">name=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.register_child"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.register_child" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Registers block as a child of self. <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a> s assigned to self as |
| attributes will be registered automatically.</p> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.register_forward_hook"> |
| <code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.register_forward_hook"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.register_forward_hook" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Registers a forward hook on the block.</p> |
| <p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.nn.Block.forward" title="mxnet.gluon.nn.Block.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>. |
| It should not modify the input or output.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -> None</cite>.</p> |
| </dd> |
| <dt class="field-even">Returns</dt> |
| <dd class="field-even"><p></p> |
| </dd> |
| <dt class="field-odd">Return type</dt> |
| <dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p> |
| </dd> |
| </dl> |
| </dd></dl> |
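<p>The handle-based registration pattern can be sketched in plain Python. The <code>TinyBlock</code> and <code>HookHandle</code> classes below are hypothetical stand-ins, not the actual <code>mxnet.gluon</code> internals; they show why returning a handle is useful, since the caller can later remove the hook without touching the block's internals:</p>

```python
# Hypothetical sketch of the hook-handle pattern used by
# register_forward_hook: registering returns a handle whose detach()
# removes the hook again.
class HookHandle:
    def __init__(self, hooks, hid):
        self._hooks, self._id = hooks, hid

    def detach(self):
        self._hooks.pop(self._id, None)

class TinyBlock:
    def __init__(self):
        self._forward_hooks = {}
        self._next_id = 0

    def register_forward_hook(self, hook):
        self._next_id += 1
        self._forward_hooks[self._next_id] = hook
        return HookHandle(self._forward_hooks, self._next_id)

    def __call__(self, x):
        out = x * 2  # stand-in for the real forward computation
        for hook in list(self._forward_hooks.values()):
            hook(self, x, out)  # hook(block, input, output) -> None
        return out

blk = TinyBlock()
seen = []
handle = blk.register_forward_hook(lambda b, i, o: seen.append((i, o)))
blk(3)           # hook fires after forward: seen == [(3, 6)]
handle.detach()  # hook removed
blk(4)           # no hook call; seen unchanged
```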
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.register_forward_pre_hook"> |
| <code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.register_forward_pre_hook"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.register_forward_pre_hook" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Registers a forward pre-hook on the block.</p> |
| <p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.nn.Block.forward" title="mxnet.gluon.nn.Block.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>. |
| It should not modify the input or output.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -> None</cite>.</p> |
| </dd> |
| <dt class="field-even">Returns</dt> |
| <dd class="field-even"><p></p> |
| </dd> |
| <dt class="field-odd">Return type</dt> |
| <dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.register_op_hook"> |
| <code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.register_op_hook"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.register_op_hook" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Installs a callback monitor.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>callback</strong> (<em>function</em>) – Takes a string and an NDArrayHandle.</p></li> |
| <li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If true, monitor both input and output, otherwise monitor output only.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.save_parameters"> |
| <code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.save_parameters"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.save_parameters" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Save parameters to file.</p> |
| <p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this |
| method only saves parameters, not model structure. If you want to save |
| model structures, please use <a class="reference internal" href="#mxnet.gluon.nn.HybridBlock.export" title="mxnet.gluon.nn.HybridBlock.export"><code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code></a>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li> |
| <li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block |
| contains multiple sub-blocks that share parameters, each of the |
| shared parameters will be separately saved for every sub-block.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p class="rubric">References</p> |
| <p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p> |
| </dd></dl> |
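<p>What <code>deduplicate=True</code> avoids can be sketched with plain Python objects; <code>collect_to_save</code> is a hypothetical helper (lists stand in for parameter arrays), not the MXNet implementation:</p>

```python
# Sketch of deduplicate: when two sub-blocks share a parameter object,
# saving per-sub-block writes the same array twice, while deduplicating
# by object identity stores it once under its first name.
def collect_to_save(named_params, deduplicate=False):
    if not deduplicate:
        return {name: p for name, p in named_params}
    saved, seen = {}, set()
    for name, p in named_params:
        if id(p) in seen:
            continue  # shared parameter already stored
        seen.add(id(p))
        saved[name] = p
    return saved

shared = [1.0, 2.0]  # one parameter array shared by two sub-blocks
params = [("dense0_weight", shared), ("dense1_weight", shared)]

len(collect_to_save(params, deduplicate=False))  # 2 entries (duplicated)
len(collect_to_save(params, deduplicate=True))   # 1 entry (shared saved once)
```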
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.save_params"> |
| <code class="sig-name descname">save_params</code><span class="sig-paren">(</span><em class="sig-param">filename</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.save_params"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.save_params" title="Permalink to this definition">¶</a></dt> |
| <dd><p>[Deprecated] Please use save_parameters. Note that if you want to load |
| from a SymbolBlock later, please use export instead.</p> |
| <p>Save parameters to file.</p> |
| <dl class="simple"> |
| <dt>filename<span class="classifier">str</span></dt><dd><p>Path to file.</p> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Block.summary"> |
| <code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#Block.summary"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Block.summary" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Print the summary of the model’s output and parameters.</p> |
| <p>The network must have been initialized, and must not have been hybridized.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only |
| <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Conv1D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Conv1D</code><span class="sig-paren">(</span><em class="sig-param">channels</em>, <em class="sig-param">kernel_size</em>, <em class="sig-param">strides=1</em>, <em class="sig-param">padding=0</em>, <em class="sig-param">dilation=1</em>, <em class="sig-param">groups=1</em>, <em class="sig-param">layout='NCW'</em>, <em class="sig-param">activation=None</em>, <em class="sig-param">use_bias=True</em>, <em class="sig-param">weight_initializer=None</em>, <em class="sig-param">bias_initializer='zeros'</em>, <em class="sig-param">in_channels=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#Conv1D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Conv1D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Conv</span></code></p> |
| <p>1D convolution layer (e.g. temporal convolution).</p> |
| <p>This layer creates a convolution kernel that is convolved |
| with the layer input over a single spatial (or temporal) dimension |
| to produce a tensor of outputs. |
| If <cite>use_bias</cite> is True, a bias vector is created and added to the outputs. |
| Finally, if <cite>activation</cite> is not <cite>None</cite>, |
| it is applied to the outputs as well.</p> |
| <p>If <cite>in_channels</cite> is not specified, <cite>Parameter</cite> initialization will be |
| deferred to the first time <cite>forward</cite> is called and <cite>in_channels</cite> will be |
| inferred from the shape of input data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>channels</strong> (<em>int</em>) – The dimensionality of the output space, i.e. the number of output |
| channels (filters) in the convolution.</p></li> |
| <li><p><strong>kernel_size</strong> (<em>int</em><em> or </em><em>tuple/list of 1 int</em>) – Specifies the dimensions of the convolution window.</p></li> |
| <li><p><strong>strides</strong> (<em>int</em><em> or </em><em>tuple/list of 1 int</em>) – Specifies the strides of the convolution.</p></li> |
| <li><p><strong>padding</strong> (<em>int</em><em> or </em><em>a tuple/list of 1 int</em>) – If padding is non-zero, the input is implicitly zero-padded |
| on both sides by <cite>padding</cite> points.</p></li> |
| <li><p><strong>dilation</strong> (<em>int</em><em> or </em><em>tuple/list of 1 int</em>) – Specifies the dilation rate to use for dilated convolution.</p></li> |
| <li><p><strong>groups</strong> (<em>int</em>) – Controls the connections between inputs and outputs. |
| At groups=1, all inputs are convolved to all outputs. |
| At groups=2, the operation becomes equivalent to having two conv |
| layers side by side, each seeing half the input channels, and producing |
| half the output channels, and both subsequently concatenated.</p></li> |
| <li><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCW'</em>) – Dimension ordering of data and weight. Only supports ‘NCW’ layout for now. |
| ‘N’, ‘C’, ‘W’ stand for batch, channel, and width (time) dimensions |
| respectively. Convolution is applied on the ‘W’ dimension.</p></li> |
| <li><p><strong>in_channels</strong> (<em>int</em><em>, </em><em>default 0</em>) – The number of input channels to this layer. If not specified, |
| initialization will be deferred to the first time <cite>forward</cite> is called |
| and <cite>in_channels</cite> will be inferred from the shape of input data.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em>) – Activation function to use. See <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a>. |
| If you don’t specify anything, no activation is applied |
| (i.e. “linear” activation: <cite>a(x) = x</cite>).</p></li> |
| <li><p><strong>use_bias</strong> (<em>bool</em>) – Whether the layer uses a bias vector.</p></li> |
| <li><p><strong>weight_initializer</strong> (str or <cite>Initializer</cite>) – Initializer for the <cite>weight</cite> weights matrix.</p></li> |
| <li><p><strong>bias_initializer</strong> (str or <cite>Initializer</cite>) – Initializer for the bias vector.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 3D input tensor with shape <cite>(batch_size, in_channels, width)</cite> |
| when <cite>layout</cite> is <cite>NCW</cite>. For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul> |
| <li><p><strong>out</strong>: 3D output tensor with shape <cite>(batch_size, channels, out_width)</cite> |
| when <cite>layout</cite> is <cite>NCW</cite>. out_width is calculated as:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">out_width</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">width</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="o">-</span><span class="n">dilation</span><span class="o">*</span><span class="p">(</span><span class="n">kernel_size</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">/</span><span class="n">stride</span><span class="p">)</span><span class="o">+</span><span class="mi">1</span> |
| </pre></div> |
| </div> |
| </li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
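<p>As a quick sanity check of the <cite>out_width</cite> formula above, it can be evaluated in plain Python (the helper name <cite>conv1d_out_width</cite> is illustrative, not part of the Gluon API):</p>

```python
import math

def conv1d_out_width(width, kernel_size, strides=1, padding=0, dilation=1):
    # Mirrors the Conv1D output-shape formula from the documentation above.
    return math.floor((width + 2 * padding - dilation * (kernel_size - 1) - 1) / strides) + 1

# A width-100 input through a kernel of size 3, stride 2, padding 1:
print(conv1d_out_width(100, kernel_size=3, strides=2, padding=1))  # 50
```

<p>With stride 1 and padding <cite>(kernel_size - 1) // 2</cite>, the output width equals the input width, which is why “same” padding for a size-3 kernel is 1.</p>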
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Conv1DTranspose"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Conv1DTranspose</code><span class="sig-paren">(</span><em class="sig-param">channels</em>, <em class="sig-param">kernel_size</em>, <em class="sig-param">strides=1</em>, <em class="sig-param">padding=0</em>, <em class="sig-param">output_padding=0</em>, <em class="sig-param">dilation=1</em>, <em class="sig-param">groups=1</em>, <em class="sig-param">layout='NCW'</em>, <em class="sig-param">activation=None</em>, <em class="sig-param">use_bias=True</em>, <em class="sig-param">weight_initializer=None</em>, <em class="sig-param">bias_initializer='zeros'</em>, <em class="sig-param">in_channels=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#Conv1DTranspose"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Conv1DTranspose" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Conv</span></code></p> |
| <p>Transposed 1D convolution layer (sometimes called Deconvolution).</p> |
| <p>The need for transposed convolutions generally arises |
| from the desire to use a transformation going in the opposite direction |
| of a normal convolution, i.e., from something that has the shape of the |
| output of some convolution to something that has the shape of its input |
| while maintaining a connectivity pattern that is compatible with |
| said convolution.</p> |
| <p>If <cite>in_channels</cite> is not specified, <cite>Parameter</cite> initialization will be |
| deferred to the first time <cite>forward</cite> is called and <cite>in_channels</cite> will be |
| inferred from the shape of input data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>channels</strong> (<em>int</em>) – The dimensionality of the output space, i.e. the number of output |
| channels (filters) in the convolution.</p></li> |
| <li><p><strong>kernel_size</strong> (<em>int</em><em> or </em><em>tuple/list of 1 int</em>) – Specifies the dimensions of the convolution window.</p></li> |
| <li><p><strong>strides</strong> (<em>int</em><em> or </em><em>tuple/list of 1 int</em>) – Specifies the strides of the convolution.</p></li> |
| <li><p><strong>padding</strong> (<em>int</em><em> or </em><em>a tuple/list of 1 int</em>) – If padding is non-zero, the input is implicitly zero-padded |
| on both sides by <cite>padding</cite> points.</p></li> |
| <li><p><strong>output_padding</strong> (<em>int</em><em> or </em><em>a tuple/list of 1 int</em>) – Adds <cite>output_padding</cite> extra points of implicit zero-padding to the |
| output size for each dimension, resolving the output-shape ambiguity left by a strided convolution.</p></li> |
| <li><p><strong>dilation</strong> (<em>int</em><em> or </em><em>tuple/list of 1 int</em>) – Controls the spacing between the kernel points; also known as the |
| à trous algorithm.</p></li> |
| <li><p><strong>groups</strong> (<em>int</em>) – Controls the connections between inputs and outputs. |
| At groups=1, all inputs are convolved to all outputs. |
| At groups=2, the operation becomes equivalent to having two conv |
| layers side by side, each seeing half the input channels, and producing |
| half the output channels, and both subsequently concatenated.</p></li> |
| <li><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCW'</em>) – Dimension ordering of data and weight. Only supports ‘NCW’ layout for now. |
| ‘N’, ‘C’, ‘W’ stand for batch, channel, and width (time) dimensions |
| respectively. Convolution is applied on the ‘W’ dimension.</p></li> |
| <li><p><strong>in_channels</strong> (<em>int</em><em>, </em><em>default 0</em>) – The number of input channels to this layer. If not specified, |
| initialization will be deferred to the first time <cite>forward</cite> is called |
| and <cite>in_channels</cite> will be inferred from the shape of input data.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em>) – Activation function to use. See <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a>. |
| If you don’t specify anything, no activation is applied |
| (i.e. “linear” activation: <cite>a(x) = x</cite>).</p></li> |
| <li><p><strong>use_bias</strong> (<em>bool</em>) – Whether the layer uses a bias vector.</p></li> |
| <li><p><strong>weight_initializer</strong> (str or <cite>Initializer</cite>) – Initializer for the <cite>weight</cite> weights matrix.</p></li> |
| <li><p><strong>bias_initializer</strong> (str or <cite>Initializer</cite>) – Initializer for the bias vector.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 3D input tensor with shape <cite>(batch_size, in_channels, width)</cite> |
| when <cite>layout</cite> is <cite>NCW</cite>. For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul> |
| <li><p><strong>out</strong>: 3D output tensor with shape <cite>(batch_size, channels, out_width)</cite> |
| when <cite>layout</cite> is <cite>NCW</cite>. out_width is calculated as:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">out_width</span> <span class="o">=</span> <span class="p">(</span><span class="n">width</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">*</span><span class="n">strides</span><span class="o">-</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="o">+</span><span class="n">kernel_size</span><span class="o">+</span><span class="n">output_padding</span> |
| </pre></div> |
| </div> |
| </li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
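<p>The transposed formula is, up to the floor, the inverse of the <cite>Conv1D</cite> shape formula; <cite>output_padding</cite> resolves the ambiguity the floor introduces. A plain-Python sketch (the helper name is illustrative, not part of the Gluon API):</p>

```python
def conv1d_transpose_out_width(width, kernel_size, strides=1, padding=0, output_padding=0):
    # Mirrors the Conv1DTranspose output-shape formula from the documentation above.
    return (width - 1) * strides - 2 * padding + kernel_size + output_padding

# A width-10 input through Conv1D(k=3, s=2, p=1) yields width 5; the
# transpose with the same k, s, p maps 5 back to either 9 or 10,
# depending on output_padding:
print(conv1d_transpose_out_width(5, kernel_size=3, strides=2, padding=1, output_padding=0))  # 9
print(conv1d_transpose_out_width(5, kernel_size=3, strides=2, padding=1, output_padding=1))  # 10
```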
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Conv2D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Conv2D</code><span class="sig-paren">(</span><em class="sig-param">channels</em>, <em class="sig-param">kernel_size</em>, <em class="sig-param">strides=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">padding=(0</em>, <em class="sig-param">0)</em>, <em class="sig-param">dilation=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">groups=1</em>, <em class="sig-param">layout='NCHW'</em>, <em class="sig-param">activation=None</em>, <em class="sig-param">use_bias=True</em>, <em class="sig-param">weight_initializer=None</em>, <em class="sig-param">bias_initializer='zeros'</em>, <em class="sig-param">in_channels=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#Conv2D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Conv2D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Conv</span></code></p> |
| <p>2D convolution layer (e.g. spatial convolution over images).</p> |
| <p>This layer creates a convolution kernel that is convolved |
| with the layer input to produce a tensor of |
| outputs. If <cite>use_bias</cite> is True, |
| a bias vector is created and added to the outputs. Finally, if |
| <cite>activation</cite> is not <cite>None</cite>, it is applied to the outputs as well.</p> |
| <p>If <cite>in_channels</cite> is not specified, <cite>Parameter</cite> initialization will be |
| deferred to the first time <cite>forward</cite> is called and <cite>in_channels</cite> will be |
| inferred from the shape of input data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>channels</strong> (<em>int</em>) – The dimensionality of the output space, i.e. the number of output |
| channels (filters) in the convolution.</p></li> |
| <li><p><strong>kernel_size</strong> (<em>int</em><em> or </em><em>tuple/list of 2 int</em>) – Specifies the dimensions of the convolution window.</p></li> |
| <li><p><strong>strides</strong> (<em>int</em><em> or </em><em>tuple/list of 2 int</em>) – Specifies the strides of the convolution.</p></li> |
| <li><p><strong>padding</strong> (<em>int</em><em> or </em><em>a tuple/list of 2 int</em>) – If padding is non-zero, the input is implicitly zero-padded |
| on both sides by <cite>padding</cite> points.</p></li> |
| <li><p><strong>dilation</strong> (<em>int</em><em> or </em><em>tuple/list of 2 int</em>) – Specifies the dilation rate to use for dilated convolution.</p></li> |
| <li><p><strong>groups</strong> (<em>int</em>) – Controls the connections between inputs and outputs. |
| At groups=1, all inputs are convolved to all outputs. |
| At groups=2, the operation becomes equivalent to having two conv |
| layers side by side, each seeing half the input channels, and producing |
| half the output channels, and both subsequently concatenated.</p></li> |
| <li><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCHW'</em>) – Dimension ordering of data and weight. Only supports ‘NCHW’ and ‘NHWC’ |
| layout for now. ‘N’, ‘C’, ‘H’, ‘W’ stand for batch, channel, height, |
| and width dimensions respectively. Convolution is applied on the ‘H’ and |
| ‘W’ dimensions.</p></li> |
| <li><p><strong>in_channels</strong> (<em>int</em><em>, </em><em>default 0</em>) – The number of input channels to this layer. If not specified, |
| initialization will be deferred to the first time <cite>forward</cite> is called |
| and <cite>in_channels</cite> will be inferred from the shape of input data.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em>) – Activation function to use. See <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a>. |
| If you don’t specify anything, no activation is applied |
| (i.e. “linear” activation: <cite>a(x) = x</cite>).</p></li> |
| <li><p><strong>use_bias</strong> (<em>bool</em>) – Whether the layer uses a bias vector.</p></li> |
| <li><p><strong>weight_initializer</strong> (str or <cite>Initializer</cite>) – Initializer for the <cite>weight</cite> weights matrix.</p></li> |
| <li><p><strong>bias_initializer</strong> (str or <cite>Initializer</cite>) – Initializer for the bias vector.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 4D input tensor with shape |
| <cite>(batch_size, in_channels, height, width)</cite> when <cite>layout</cite> is <cite>NCHW</cite>. |
| For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul> |
| <li><p><strong>out</strong>: 4D output tensor with shape |
| <cite>(batch_size, channels, out_height, out_width)</cite> when <cite>layout</cite> is <cite>NCHW</cite>. |
| out_height and out_width are calculated as:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">out_height</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">height</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="n">dilation</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">/</span><span class="n">stride</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| <span class="n">out_width</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">width</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">-</span><span class="n">dilation</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">/</span><span class="n">stride</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| </pre></div> |
| </div> |
| </li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
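<p>The two formulas above are the 1D rule applied independently to the ‘H’ and ‘W’ dimensions. A plain-Python check (the helper name is illustrative, not part of the Gluon API):</p>

```python
import math

def conv2d_out_shape(height, width, kernel_size, strides=(1, 1),
                     padding=(0, 0), dilation=(1, 1)):
    # Applies the Conv2D output-shape formula above to H and W separately.
    def one_dim(size, k, s, p, d):
        return math.floor((size + 2 * p - d * (k - 1) - 1) / s) + 1
    return (one_dim(height, kernel_size[0], strides[0], padding[0], dilation[0]),
            one_dim(width, kernel_size[1], strides[1], padding[1], dilation[1]))

# A 224x224 input through a 3x3 kernel, stride 2, padding 1 -- a common
# downsampling configuration -- halves each spatial dimension:
print(conv2d_out_shape(224, 224, (3, 3), strides=(2, 2), padding=(1, 1)))  # (112, 112)
```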
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Conv2DTranspose"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Conv2DTranspose</code><span class="sig-paren">(</span><em class="sig-param">channels</em>, <em class="sig-param">kernel_size</em>, <em class="sig-param">strides=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">padding=(0</em>, <em class="sig-param">0)</em>, <em class="sig-param">output_padding=(0</em>, <em class="sig-param">0)</em>, <em class="sig-param">dilation=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">groups=1</em>, <em class="sig-param">layout='NCHW'</em>, <em class="sig-param">activation=None</em>, <em class="sig-param">use_bias=True</em>, <em class="sig-param">weight_initializer=None</em>, <em class="sig-param">bias_initializer='zeros'</em>, <em class="sig-param">in_channels=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#Conv2DTranspose"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Conv2DTranspose" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Conv</span></code></p> |
| <p>Transposed 2D convolution layer (sometimes called Deconvolution).</p> |
| <p>The need for transposed convolutions generally arises |
| from the desire to use a transformation going in the opposite direction |
| of a normal convolution, i.e., from something that has the shape of the |
| output of some convolution to something that has the shape of its input |
| while maintaining a connectivity pattern that is compatible with |
| said convolution.</p> |
| <p>If <cite>in_channels</cite> is not specified, <cite>Parameter</cite> initialization will be |
| deferred to the first time <cite>forward</cite> is called and <cite>in_channels</cite> will be |
| inferred from the shape of input data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>channels</strong> (<em>int</em>) – The dimensionality of the output space, i.e. the number of output |
| channels (filters) in the convolution.</p></li> |
| <li><p><strong>kernel_size</strong> (<em>int</em><em> or </em><em>tuple/list of 2 int</em>) – Specifies the dimensions of the convolution window.</p></li> |
| <li><p><strong>strides</strong> (<em>int</em><em> or </em><em>tuple/list of 2 int</em>) – Specifies the strides of the convolution.</p></li> |
| <li><p><strong>padding</strong> (<em>int</em><em> or </em><em>a tuple/list of 2 int</em>) – If padding is non-zero, the input is implicitly zero-padded |
| on both sides by <cite>padding</cite> points.</p></li> |
| <li><p><strong>output_padding</strong> (<em>int</em><em> or </em><em>a tuple/list of 2 int</em>) – Adds <cite>output_padding</cite> extra points of implicit zero-padding to the |
| output size for each dimension, resolving the output-shape ambiguity left by a strided convolution.</p></li> |
| <li><p><strong>dilation</strong> (<em>int</em><em> or </em><em>tuple/list of 2 int</em>) – Controls the spacing between the kernel points; also known as the |
| à trous algorithm.</p></li> |
| <li><p><strong>groups</strong> (<em>int</em>) – Controls the connections between inputs and outputs. |
| At groups=1, all inputs are convolved to all outputs. |
| At groups=2, the operation becomes equivalent to having two conv |
| layers side by side, each seeing half the input channels, and producing |
| half the output channels, and both subsequently concatenated.</p></li> |
| <li><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCHW'</em>) – Dimension ordering of data and weight. Only supports ‘NCHW’ and ‘NHWC’ |
| layout for now. ‘N’, ‘C’, ‘H’, ‘W’ stand for batch, channel, height, |
| and width dimensions respectively. Convolution is applied on the ‘H’ and |
| ‘W’ dimensions.</p></li> |
| <li><p><strong>in_channels</strong> (<em>int</em><em>, </em><em>default 0</em>) – The number of input channels to this layer. If not specified, |
| initialization will be deferred to the first time <cite>forward</cite> is called |
| and <cite>in_channels</cite> will be inferred from the shape of input data.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em>) – Activation function to use. See <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a>. |
| If you don’t specify anything, no activation is applied |
| (i.e. “linear” activation: <cite>a(x) = x</cite>).</p></li> |
| <li><p><strong>use_bias</strong> (<em>bool</em>) – Whether the layer uses a bias vector.</p></li> |
| <li><p><strong>weight_initializer</strong> (str or <cite>Initializer</cite>) – Initializer for the <cite>weight</cite> weights matrix.</p></li> |
| <li><p><strong>bias_initializer</strong> (str or <cite>Initializer</cite>) – Initializer for the bias vector.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 4D input tensor with shape |
| <cite>(batch_size, in_channels, height, width)</cite> when <cite>layout</cite> is <cite>NCHW</cite>. |
| For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul> |
| <li><p><strong>out</strong>: 4D output tensor with shape |
| <cite>(batch_size, channels, out_height, out_width)</cite> when <cite>layout</cite> is <cite>NCHW</cite>. |
| out_height and out_width are calculated as:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">out_height</span> <span class="o">=</span> <span class="p">(</span><span class="n">height</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">*</span><span class="n">strides</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">+</span><span class="n">kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">+</span><span class="n">output_padding</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> |
| <span class="n">out_width</span> <span class="o">=</span> <span class="p">(</span><span class="n">width</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">*</span><span class="n">strides</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">-</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">+</span><span class="n">kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">+</span><span class="n">output_padding</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> |
| </pre></div> |
| </div> |
| </li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
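<p>As with the 1D case, the transposed formulas invert the <cite>Conv2D</cite> shape arithmetic per dimension. A plain-Python check (the helper name is illustrative, not part of the Gluon API):</p>

```python
def conv2d_transpose_out_shape(height, width, kernel_size, strides=(1, 1),
                               padding=(0, 0), output_padding=(0, 0)):
    # Applies the Conv2DTranspose output-shape formula above to H and W separately.
    def one_dim(size, k, s, p, op):
        return (size - 1) * s - 2 * p + k + op
    return (one_dim(height, kernel_size[0], strides[0], padding[0], output_padding[0]),
            one_dim(width, kernel_size[1], strides[1], padding[1], output_padding[1]))

# Upsampling a 112x112 feature map with a 3x3 kernel, stride 2, padding 1,
# output_padding 1 restores 224x224, mirroring the stride-2 Conv2D case:
print(conv2d_transpose_out_shape(112, 112, (3, 3), (2, 2), (1, 1), (1, 1)))  # (224, 224)
```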
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Conv3D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Conv3D</code><span class="sig-paren">(</span><em class="sig-param">channels</em>, <em class="sig-param">kernel_size</em>, <em class="sig-param">strides=(1</em>, <em class="sig-param">1</em>, <em class="sig-param">1)</em>, <em class="sig-param">padding=(0</em>, <em class="sig-param">0</em>, <em class="sig-param">0)</em>, <em class="sig-param">dilation=(1</em>, <em class="sig-param">1</em>, <em class="sig-param">1)</em>, <em class="sig-param">groups=1</em>, <em class="sig-param">layout='NCDHW'</em>, <em class="sig-param">activation=None</em>, <em class="sig-param">use_bias=True</em>, <em class="sig-param">weight_initializer=None</em>, <em class="sig-param">bias_initializer='zeros'</em>, <em class="sig-param">in_channels=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#Conv3D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Conv3D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Conv</span></code></p> |
| <p>3D convolution layer (e.g. spatial convolution over volumes).</p> |
| <p>This layer creates a convolution kernel that is convolved |
| with the layer input to produce a tensor of |
| outputs. If <cite>use_bias</cite> is <cite>True</cite>, |
| a bias vector is created and added to the outputs. Finally, if |
| <cite>activation</cite> is not <cite>None</cite>, it is applied to the outputs as well.</p> |
| <p>If <cite>in_channels</cite> is not specified, <cite>Parameter</cite> initialization will be |
| deferred to the first time <cite>forward</cite> is called and <cite>in_channels</cite> will be |
| inferred from the shape of input data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>channels</strong> (<em>int</em>) – The dimensionality of the output space, i.e. the number of output |
| channels (filters) in the convolution.</p></li> |
| <li><p><strong>kernel_size</strong> (<em>int</em><em> or </em><em>tuple/list of 3 int</em>) – Specifies the dimensions of the convolution window.</p></li> |
| <li><p><strong>strides</strong> (<em>int</em><em> or </em><em>tuple/list of 3 int</em>) – Specifies the strides of the convolution.</p></li> |
| <li><p><strong>padding</strong> (<em>int</em><em> or </em><em>a tuple/list of 3 int</em>) – If padding is non-zero, the input is implicitly zero-padded |
| on both sides by <cite>padding</cite> points.</p></li> |
| <li><p><strong>dilation</strong> (<em>int</em><em> or </em><em>tuple/list of 3 int</em>) – Specifies the dilation rate to use for dilated convolution.</p></li> |
| <li><p><strong>groups</strong> (<em>int</em>) – Controls the connections between inputs and outputs. |
| At groups=1, all inputs are convolved to all outputs. |
| At groups=2, the operation becomes equivalent to having two conv |
| layers side by side, each seeing half the input channels, and producing |
| half the output channels, and both subsequently concatenated.</p></li> |
| <li><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCDHW'</em>) – Dimension ordering of data and weight. Only supports ‘NCDHW’ and ‘NDHWC’ |
| layout for now. ‘N’, ‘C’, ‘D’, ‘H’, ‘W’ stand for batch, channel, depth, |
| height and width dimensions respectively. Convolution is applied on the ‘D’, |
| ‘H’ and ‘W’ dimensions.</p></li> |
| <li><p><strong>in_channels</strong> (<em>int</em><em>, </em><em>default 0</em>) – The number of input channels to this layer. If not specified, |
| initialization will be deferred to the first time <cite>forward</cite> is called |
| and <cite>in_channels</cite> will be inferred from the shape of input data.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em>) – Activation function to use. See <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a>. |
| If you don’t specify anything, no activation is applied |
| (i.e. “linear” activation: <cite>a(x) = x</cite>).</p></li> |
| <li><p><strong>use_bias</strong> (<em>bool</em>) – Whether the layer uses a bias vector.</p></li> |
| <li><p><strong>weight_initializer</strong> (str or <cite>Initializer</cite>) – Initializer for the <cite>weight</cite> matrix.</p></li> |
| <li><p><strong>bias_initializer</strong> (str or <cite>Initializer</cite>) – Initializer for the bias vector.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 5D input tensor with shape |
| <cite>(batch_size, in_channels, depth, height, width)</cite> when <cite>layout</cite> is <cite>NCDHW</cite>. |
| For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul> |
| <li><p><strong>out</strong>: 5D output tensor with shape |
| <cite>(batch_size, channels, out_depth, out_height, out_width)</cite> when <cite>layout</cite> is <cite>NCDHW</cite>. |
| out_depth, out_height and out_width are calculated as:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">out_depth</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">depth</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="n">dilation</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">/</span><span class="n">stride</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| <span class="n">out_height</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">height</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">-</span><span class="n">dilation</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">/</span><span class="n">stride</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| <span class="n">out_width</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">width</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span><span class="o">-</span><span class="n">dilation</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">/</span><span class="n">stride</span><span class="p">[</span><span class="mi">2</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| </pre></div> |
| </div> |
| </li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
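<p>As a quick sanity check on the output-shape formula above, the same arithmetic can be written as a small pure-Python helper. This is an illustrative sketch, not part of the Gluon API; the helper names are hypothetical:</p>

```python
from math import floor

def conv_out_dim(size, kernel, stride=1, padding=0, dilation=1):
    # floor((size + 2*padding - dilation*(kernel - 1) - 1) / stride) + 1
    return floor((size + 2 * padding - dilation * (kernel - 1) - 1) / stride) + 1

def conv3d_out_shape(dhw, kernel_size, strides=(1, 1, 1),
                     padding=(0, 0, 0), dilation=(1, 1, 1)):
    # Apply the per-dimension formula to depth, height and width in turn.
    return tuple(conv_out_dim(s, k, st, p, d)
                 for s, k, st, p, d in zip(dhw, kernel_size, strides, padding, dilation))

# A 16x32x32 input volume through a 3x3x3 kernel, stride 1, no padding:
print(conv3d_out_shape((16, 32, 32), (3, 3, 3)))  # (14, 30, 30)
```

With <cite>padding=1</cite> and <cite>stride=2</cite> the same helper reproduces the familiar halving behaviour, e.g. a 32-point axis maps to 16.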
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Conv3DTranspose"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Conv3DTranspose</code><span class="sig-paren">(</span><em class="sig-param">channels</em>, <em class="sig-param">kernel_size</em>, <em class="sig-param">strides=(1</em>, <em class="sig-param">1</em>, <em class="sig-param">1)</em>, <em class="sig-param">padding=(0</em>, <em class="sig-param">0</em>, <em class="sig-param">0)</em>, <em class="sig-param">output_padding=(0</em>, <em class="sig-param">0</em>, <em class="sig-param">0)</em>, <em class="sig-param">dilation=(1</em>, <em class="sig-param">1</em>, <em class="sig-param">1)</em>, <em class="sig-param">groups=1</em>, <em class="sig-param">layout='NCDHW'</em>, <em class="sig-param">activation=None</em>, <em class="sig-param">use_bias=True</em>, <em class="sig-param">weight_initializer=None</em>, <em class="sig-param">bias_initializer='zeros'</em>, <em class="sig-param">in_channels=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#Conv3DTranspose"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Conv3DTranspose" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Conv</span></code></p> |
| <p>Transposed 3D convolution layer (sometimes called Deconvolution).</p> |
| <p>The need for transposed convolutions generally arises |
| from the desire to use a transformation going in the opposite direction |
| of a normal convolution, i.e., from something that has the shape of the |
| output of some convolution to something that has the shape of its input |
| while maintaining a connectivity pattern that is compatible with |
| said convolution.</p> |
| <p>If <cite>in_channels</cite> is not specified, <cite>Parameter</cite> initialization will be |
| deferred to the first time <cite>forward</cite> is called and <cite>in_channels</cite> will be |
| inferred from the shape of input data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>channels</strong> (<em>int</em>) – The dimensionality of the output space, i.e. the number of output |
| channels (filters) in the convolution.</p></li> |
| <li><p><strong>kernel_size</strong> (<em>int</em><em> or </em><em>tuple/list of 3 int</em>) – Specifies the dimensions of the convolution window.</p></li> |
| <li><p><strong>strides</strong> (<em>int</em><em> or </em><em>tuple/list of 3 int</em>) – Specifies the strides of the convolution.</p></li> |
| <li><p><strong>padding</strong> (<em>int</em><em> or </em><em>tuple/list of 3 int</em>) – If padding is non-zero, the input is implicitly zero-padded |
| on both sides by <cite>padding</cite> points.</p></li> |
| <li><p><strong>output_padding</strong> (<em>int</em><em> or </em><em>tuple/list of 3 int</em>) – Controls the additional size added to each dimension of the |
| output shape (<cite>output_padding</cite> points per dimension).</p></li> |
| <li><p><strong>dilation</strong> (<em>int</em><em> or </em><em>tuple/list of 3 int</em>) – Controls the spacing between the kernel points; also known as the |
| à trous algorithm.</p></li> |
| <li><p><strong>groups</strong> (<em>int</em>) – Controls the connections between inputs and outputs. |
| At groups=1, all inputs are convolved to all outputs. |
| At groups=2, the operation becomes equivalent to having two conv |
| layers side by side, each seeing half the input channels, and producing |
| half the output channels, and both subsequently concatenated.</p></li> |
| <li><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCDHW'</em>) – Dimension ordering of data and weight. Only supports ‘NCDHW’ and ‘NDHWC’ |
| layout for now. ‘N’, ‘C’, ‘H’, ‘W’, ‘D’ stand for batch, channel, height, |
| width and depth dimensions respectively. Convolution is applied on the ‘D’, |
| ‘H’ and ‘W’ dimensions.</p></li> |
| <li><p><strong>in_channels</strong> (<em>int</em><em>, </em><em>default 0</em>) – The number of input channels to this layer. If not specified, |
| initialization will be deferred to the first time <cite>forward</cite> is called |
| and <cite>in_channels</cite> will be inferred from the shape of input data.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em>) – Activation function to use. See <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a>. |
| If you don’t specify anything, no activation is applied |
| (i.e. “linear” activation: <cite>a(x) = x</cite>).</p></li> |
| <li><p><strong>use_bias</strong> (<em>bool</em>) – Whether the layer uses a bias vector.</p></li> |
| <li><p><strong>weight_initializer</strong> (str or <cite>Initializer</cite>) – Initializer for the <cite>weight</cite> matrix.</p></li> |
| <li><p><strong>bias_initializer</strong> (str or <cite>Initializer</cite>) – Initializer for the bias vector.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 5D input tensor with shape |
| <cite>(batch_size, in_channels, depth, height, width)</cite> when <cite>layout</cite> is <cite>NCDHW</cite>. |
| For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul> |
| <li><p><strong>out</strong>: 5D output tensor with shape |
| <cite>(batch_size, channels, out_depth, out_height, out_width)</cite> when <cite>layout</cite> is <cite>NCDHW</cite>. |
| out_depth, out_height and out_width are calculated as:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">out_depth</span> <span class="o">=</span> <span class="p">(</span><span class="n">depth</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">*</span><span class="n">strides</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">+</span><span class="n">kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">+</span><span class="n">output_padding</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> |
| <span class="n">out_height</span> <span class="o">=</span> <span class="p">(</span><span class="n">height</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">*</span><span class="n">strides</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">-</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">+</span><span class="n">kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">+</span><span class="n">output_padding</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> |
| <span class="n">out_width</span> <span class="o">=</span> <span class="p">(</span><span class="n">width</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">*</span><span class="n">strides</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span><span class="o">-</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span><span class="o">+</span><span class="n">kernel_size</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span><span class="o">+</span><span class="n">output_padding</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> |
| </pre></div> |
| </div> |
| </li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
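<p>The transposed-convolution shape formula above inverts the <cite>Conv3D</cite> one. A pure-Python sketch of the same arithmetic (illustrative only, not part of the Gluon API):</p>

```python
def conv3d_transpose_out_shape(dhw, kernel_size, strides=(1, 1, 1),
                               padding=(0, 0, 0), output_padding=(0, 0, 0)):
    # out = (in - 1)*stride - 2*padding + kernel_size + output_padding, per dimension
    return tuple((s - 1) * st - 2 * p + k + op
                 for s, st, p, k, op in zip(dhw, strides, padding, kernel_size, output_padding))

# Recovers the spatial shape consumed by a 3x3x3 Conv3D with stride 1, no padding:
print(conv3d_transpose_out_shape((14, 30, 30), (3, 3, 3)))  # (16, 32, 32)

# With stride 2 and a 2x2x2 kernel the volume is doubled along each axis:
print(conv3d_transpose_out_shape((8, 8, 8), (2, 2, 2), strides=(2, 2, 2)))  # (16, 16, 16)
```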
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Dense"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Dense</code><span class="sig-paren">(</span><em class="sig-param">units</em>, <em class="sig-param">activation=None</em>, <em class="sig-param">use_bias=True</em>, <em class="sig-param">flatten=True</em>, <em class="sig-param">dtype='float32'</em>, <em class="sig-param">weight_initializer=None</em>, <em class="sig-param">bias_initializer='zeros'</em>, <em class="sig-param">in_units=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#Dense"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Dense" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Just your regular densely-connected NN layer.</p> |
| <p><cite>Dense</cite> implements the operation: |
| <cite>output = activation(dot(input, weight) + bias)</cite> |
| where <cite>activation</cite> is the element-wise activation function |
| passed as the <cite>activation</cite> argument, <cite>weight</cite> is a weights matrix |
| created by the layer, and <cite>bias</cite> is a bias vector created by the layer |
| (only applicable if <cite>use_bias</cite> is <cite>True</cite>).</p> |
| <div class="admonition note"> |
| <p class="admonition-title">Note</p> |
| <p>The input must be a tensor with rank 2. Use <cite>flatten</cite> to convert it |
| to rank 2 manually if necessary.</p> |
| </div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Dense.hybrid_forward" title="mxnet.gluon.nn.Dense.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x, weight[, bias])</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>units</strong> (<em>int</em>) – Dimensionality of the output space.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em>) – Activation function to use. See help on <cite>Activation</cite> layer. |
| If you don’t specify anything, no activation is applied |
| (i.e. “linear” activation: <cite>a(x) = x</cite>).</p></li> |
| <li><p><strong>use_bias</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether the layer uses a bias vector.</p></li> |
| <li><p><strong>flatten</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether the input tensor should be flattened. |
| If true, all but the first axis of input data are collapsed together. |
| If false, all but the last axis of input data are kept the same, and the transformation |
| applies on the last axis.</p></li> |
| <li><p><strong>dtype</strong> (<em>str</em><em> or </em><em>np.dtype</em><em>, </em><em>default 'float32'</em>) – Data type of the layer’s weight and output.</p></li> |
| <li><p><strong>weight_initializer</strong> (str or <cite>Initializer</cite>) – Initializer for the <cite>kernel</cite> weights matrix.</p></li> |
| <li><p><strong>bias_initializer</strong> (str or <cite>Initializer</cite>) – Initializer for the bias vector.</p></li> |
| <li><p><strong>in_units</strong> (<em>int</em><em>, </em><em>optional</em>) – Size of the input data. If not specified, initialization will be |
| deferred to the first time <cite>forward</cite> is called and <cite>in_units</cite> |
| will be inferred from the shape of input data.</p></li> |
| <li><p><strong>prefix</strong> (<em>str</em><em> or </em><em>None</em>) – See document of <cite>Block</cite>.</p></li> |
| <li><p><strong>params</strong> (<a class="reference internal" href="../parameter_dict.html#mxnet.gluon.ParameterDict" title="mxnet.gluon.ParameterDict"><em>ParameterDict</em></a><em> or </em><em>None</em>) – See document of <cite>Block</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: if <cite>flatten</cite> is True, <cite>data</cite> should be a tensor with shape |
| <cite>(batch_size, x1, x2, …, xn)</cite>, where x1 * x2 * … * xn is equal to |
| <cite>in_units</cite>. If <cite>flatten</cite> is False, <cite>data</cite> should have shape |
| <cite>(x1, x2, …, xn, in_units)</cite>.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: if <cite>flatten</cite> is True, <cite>out</cite> will be a tensor with shape |
| <cite>(batch_size, units)</cite>. If <cite>flatten</cite> is False, <cite>out</cite> will have shape |
| <cite>(x1, x2, …, xn, units)</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Dense.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em>, <em class="sig-param">weight</em>, <em class="sig-param">bias=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#Dense.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Dense.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
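<p>The operation <cite>output = activation(dot(input, weight) + bias)</cite> can be sketched in pure Python. This assumes, as Gluon does, that <cite>weight</cite> is stored with shape <cite>(units, in_units)</cite>, so the dot product runs over the second axis of both operands (illustrative sketch only, not the Gluon implementation):</p>

```python
def dense(x, weight, bias=None):
    # x: (batch, in_units); weight: (units, in_units); bias: (units,)
    # out[i][j] = sum_k x[i][k] * weight[j][k] + bias[j]
    units = len(weight)
    return [[sum(xi[k] * weight[j][k] for k in range(len(xi)))
             + (bias[j] if bias is not None else 0.0)
             for j in range(units)]
            for xi in x]

# One sample with in_units=2 through a layer with units=3:
print(dense([[1.0, 2.0]], [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [0.0, 0.0, 1.0]))
# [[1.0, 2.0, 4.0]]
```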
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Dropout"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Dropout</code><span class="sig-paren">(</span><em class="sig-param">rate</em>, <em class="sig-param">axes=()</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#Dropout"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Dropout" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Applies Dropout to the input.</p> |
| <p>Dropout consists of randomly setting a fraction <cite>rate</cite> of input units |
| to 0 at each update during training time, which helps prevent overfitting.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>rate</strong> (<em>float</em>) – Fraction of the input units to drop. Must be a number between 0 and 1.</p></li> |
| <li><p><strong>axes</strong> (<em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>)</em>) – The axes on which dropout mask is shared. If empty, regular dropout is applied.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Dropout.hybrid_forward" title="mxnet.gluon.nn.Dropout.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x)</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with arbitrary shape.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: output tensor with the same shape as <cite>data</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p class="rubric">References</p> |
| <p><a class="reference external" href="http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf">Dropout: A Simple Way to Prevent Neural Networks from Overfitting</a></p> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Dropout.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#Dropout.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Dropout.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
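<p>The training-time behaviour can be sketched in pure Python using the common “inverted dropout” formulation, in which surviving units are scaled by <cite>1/(1 - rate)</cite> so the expected activation is unchanged and inference needs no rescaling. This is an illustrative sketch under that assumption, not the Gluon implementation:</p>

```python
import random

def dropout(x, rate, training=True, seed=None):
    # Zero each unit with probability `rate`; scale survivors by 1/(1 - rate).
    # At inference time the layer is the identity.
    if not training or rate == 0.0:
        return list(x)
    rng = random.Random(seed)
    scale = 1.0 / (1.0 - rate)
    return [v * scale if rng.random() >= rate else 0.0 for v in x]

out = dropout([1.0] * 8, rate=0.5, seed=0)
print(out)  # each entry is either 0.0 or 2.0
```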
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.ELU"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">ELU</code><span class="sig-paren">(</span><em class="sig-param">alpha=1.0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/activations.html#ELU"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.ELU" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <dl class="simple"> |
| <dt>Exponential Linear Unit (ELU)</dt><dd><p>“Fast and Accurate Deep Network Learning by Exponential Linear Units”, Clevert et al., 2016 |
| <a class="reference external" href="https://arxiv.org/abs/1511.07289">https://arxiv.org/abs/1511.07289</a> |
| Published as a conference paper at ICLR 2016</p> |
| </dd> |
| </dl> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.ELU.hybrid_forward" title="mxnet.gluon.nn.ELU.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x)</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>alpha</strong> (<em>float</em>) – The alpha parameter as described by Clevert et al., 2016</p> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with arbitrary shape.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: output tensor with the same shape as <cite>data</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.ELU.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/activations.html#ELU.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.ELU.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
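<p>The activation from Clevert et al. is simple enough to state directly; a pure-Python sketch (illustrative only):</p>

```python
from math import exp

def elu(x, alpha=1.0):
    # ELU(x) = x                      for x > 0
    #        = alpha * (exp(x) - 1)   otherwise
    return x if x > 0 else alpha * (exp(x) - 1.0)

print(elu(2.0))   # 2.0 (identity on the positive side)
print(elu(-5.0))  # saturates toward -alpha for large negative inputs
```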
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Embedding"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Embedding</code><span class="sig-paren">(</span><em class="sig-param">input_dim</em>, <em class="sig-param">output_dim</em>, <em class="sig-param">dtype='float32'</em>, <em class="sig-param">weight_initializer=None</em>, <em class="sig-param">sparse_grad=False</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#Embedding"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Embedding" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Turns non-negative integers (indexes/tokens) into dense vectors |
| of fixed size, e.g. [4, 20] -> [[0.25, 0.1], [0.6, -0.2]].</p> |
| <div class="admonition note"> |
| <p class="admonition-title">Note</p> |
| <p>If <cite>sparse_grad</cite> is set to True, the gradient w.r.t. weight will be |
| sparse. Only a subset of optimizers support sparse gradients, including SGD, |
| AdaGrad and Adam. By default lazy updates are turned on, which may perform |
| differently from standard updates. For more details, please check the |
| Optimization API at: |
| <a class="reference external" href="https://mxnet.incubator.apache.org/api/python/optimization/optimization.html">https://mxnet.incubator.apache.org/api/python/optimization/optimization.html</a></p> |
| </div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Embedding.hybrid_forward" title="mxnet.gluon.nn.Embedding.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x, weight)</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>input_dim</strong> (<em>int</em>) – Size of the vocabulary, i.e. maximum integer index + 1.</p></li> |
| <li><p><strong>output_dim</strong> (<em>int</em>) – Dimension of the dense embedding.</p></li> |
| <li><p><strong>dtype</strong> (<em>str</em><em> or </em><em>np.dtype</em><em>, </em><em>default 'float32'</em>) – Data type of output embeddings.</p></li> |
| <li><p><strong>weight_initializer</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the <cite>embeddings</cite> matrix.</p></li> |
| <li><p><strong>sparse_grad</strong> (<em>bool</em>) – If True, gradient w.r.t. weight will be a ‘row_sparse’ NDArray.</p></li> |
| <li><p><strong>Inputs</strong> – <ul> |
| <li><p><strong>data</strong>: (N-1)-D tensor with shape: <cite>(x1, x2, …, xN-1)</cite>.</p></li> |
| </ul> |
| </p></li> |
| <li><p><strong>Output</strong> – <ul> |
| <li><p><strong>out</strong>: N-D tensor with shape: <cite>(x1, x2, …, xN-1, output_dim)</cite>.</p></li> |
| </ul> |
| </p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Embedding.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em>, <em class="sig-param">weight</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#Embedding.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Embedding.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
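<p>Conceptually the layer is a table lookup: each integer index selects a row of the <cite>(input_dim, output_dim)</cite> weight matrix, so an <cite>(x1, …, xN-1)</cite> index tensor yields an <cite>(x1, …, xN-1, output_dim)</cite> result. A pure-Python sketch (illustrative, not the Gluon implementation):</p>

```python
def embedding_lookup(indices, weight):
    # An int selects one row; nested lists map to correspondingly nested outputs.
    if isinstance(indices, int):
        return list(weight[indices])
    return [embedding_lookup(i, weight) for i in indices]

weight = [[0.25, 0.1], [0.6, -0.2], [0.0, 1.0]]  # input_dim=3, output_dim=2
print(embedding_lookup([1, 0], weight))  # [[0.6, -0.2], [0.25, 0.1]]
```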
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Flatten"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Flatten</code><span class="sig-paren">(</span><em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#Flatten"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Flatten" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Flattens the input to two dimensions.</p> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with arbitrary shape <cite>(N, x1, x2, …, xn)</cite></p></li> |
| </ul> |
| </dd> |
| <dt>Output:</dt><dd><ul class="simple"> |
<li><p><strong>out</strong>: 2D tensor with shape: <cite>(N, x1·x2·…·xn)</cite></p></li>
| </ul> |
| </dd> |
| </dl> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Flatten.hybrid_forward" title="mxnet.gluon.nn.Flatten.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x)</p></td> |
<td><p>Override this method to construct the symbolic graph for this <cite>Block</cite>.</p></td>
| </tr> |
| </tbody> |
| </table> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Flatten.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#Flatten.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Flatten.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
<dd><p>Override this method to construct the symbolic graph for this <cite>Block</cite>.</p>
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.GELU"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">GELU</code><span class="sig-paren">(</span><em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/activations.html#GELU"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.GELU" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <dl class="simple"> |
<dt>Gaussian Error Linear Unit (GELU)</dt><dd><p>“Gaussian Error Linear Units (GELUs)”, Hendrycks et al., 2016
| <a class="reference external" href="https://arxiv.org/abs/1606.08415">https://arxiv.org/abs/1606.08415</a></p> |
| </dd> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with arbitrary shape.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: output tensor with the same shape as <cite>data</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GELU.hybrid_forward" title="mxnet.gluon.nn.GELU.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x)</p></td> |
<td><p>Override this method to construct the symbolic graph for this <cite>Block</cite>.</p></td>
| </tr> |
| </tbody> |
| </table> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.GELU.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/activations.html#GELU.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.GELU.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
<dd><p>Override this method to construct the symbolic graph for this <cite>Block</cite>.</p>
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.GlobalAvgPool1D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">GlobalAvgPool1D</code><span class="sig-paren">(</span><em class="sig-param">layout='NCW'</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#GlobalAvgPool1D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.GlobalAvgPool1D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Pooling</span></code></p> |
| <p>Global average pooling operation for temporal data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
<dd class="field-odd"><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCW'</em>) – Dimension ordering of data and out (‘NCW’ or ‘NWC’).
‘N’, ‘C’, ‘W’ stand for batch, channel, and width (time) dimensions
respectively. Pooling is applied on the ‘W’ dimension.</p>
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 3D input tensor with shape <cite>(batch_size, in_channels, width)</cite> |
| when <cite>layout</cite> is <cite>NCW</cite>. For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: 3D output tensor with shape <cite>(batch_size, channels, 1)</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.GlobalAvgPool2D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">GlobalAvgPool2D</code><span class="sig-paren">(</span><em class="sig-param">layout='NCHW'</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#GlobalAvgPool2D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.GlobalAvgPool2D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Pooling</span></code></p> |
| <p>Global average pooling operation for spatial data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCHW'</em>) – Dimension ordering of data and out (‘NCHW’ or ‘NHWC’). |
‘N’, ‘C’, ‘H’, ‘W’ stand for batch, channel, height, and width
| dimensions respectively.</p> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 4D input tensor with shape |
| <cite>(batch_size, in_channels, height, width)</cite> when <cite>layout</cite> is <cite>NCHW</cite>. |
| For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: 4D output tensor with shape |
| <cite>(batch_size, channels, 1, 1)</cite> when <cite>layout</cite> is <cite>NCHW</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.GlobalAvgPool3D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">GlobalAvgPool3D</code><span class="sig-paren">(</span><em class="sig-param">layout='NCDHW'</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#GlobalAvgPool3D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.GlobalAvgPool3D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Pooling</span></code></p> |
| <p>Global average pooling operation for 3D data (spatial or spatio-temporal).</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCDHW'</em>) – Dimension ordering of data and out (‘NCDHW’ or ‘NDHWC’). |
‘N’, ‘C’, ‘D’, ‘H’, ‘W’ stand for batch, channel, depth, height and
width dimensions respectively. Pooling is applied on the ‘D’, ‘H’ and ‘W’
dimensions.</p>
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 5D input tensor with shape |
| <cite>(batch_size, in_channels, depth, height, width)</cite> when <cite>layout</cite> is <cite>NCDHW</cite>. |
| For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: 5D output tensor with shape |
| <cite>(batch_size, channels, 1, 1, 1)</cite> when <cite>layout</cite> is <cite>NCDHW</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.GlobalMaxPool1D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">GlobalMaxPool1D</code><span class="sig-paren">(</span><em class="sig-param">layout='NCW'</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#GlobalMaxPool1D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.GlobalMaxPool1D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Pooling</span></code></p> |
<p>Global max pooling operation for one dimensional (temporal) data.</p>
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCW'</em>) – Dimension ordering of data and out (‘NCW’ or ‘NWC’). |
‘N’, ‘C’, ‘W’ stand for batch, channel, and width (time) dimensions
| respectively. Pooling is applied on the W dimension.</p> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 3D input tensor with shape <cite>(batch_size, in_channels, width)</cite> |
| when <cite>layout</cite> is <cite>NCW</cite>. For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: 3D output tensor with shape <cite>(batch_size, channels, 1)</cite> |
| when <cite>layout</cite> is <cite>NCW</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.GlobalMaxPool2D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">GlobalMaxPool2D</code><span class="sig-paren">(</span><em class="sig-param">layout='NCHW'</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#GlobalMaxPool2D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.GlobalMaxPool2D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Pooling</span></code></p> |
| <p>Global max pooling operation for two dimensional (spatial) data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCHW'</em>) – Dimension ordering of data and out (‘NCHW’ or ‘NHWC’). |
‘N’, ‘C’, ‘H’, ‘W’ stand for batch, channel, height, and width
dimensions respectively. Pooling is applied on the ‘H’ and ‘W’ dimensions.</p>
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 4D input tensor with shape |
| <cite>(batch_size, in_channels, height, width)</cite> when <cite>layout</cite> is <cite>NCHW</cite>. |
| For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: 4D output tensor with shape |
| <cite>(batch_size, channels, 1, 1)</cite> when <cite>layout</cite> is <cite>NCHW</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.GlobalMaxPool3D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">GlobalMaxPool3D</code><span class="sig-paren">(</span><em class="sig-param">layout='NCDHW'</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#GlobalMaxPool3D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.GlobalMaxPool3D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Pooling</span></code></p> |
| <p>Global max pooling operation for 3D data (spatial or spatio-temporal).</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCDHW'</em>) – Dimension ordering of data and out (‘NCDHW’ or ‘NDHWC’). |
‘N’, ‘C’, ‘D’, ‘H’, ‘W’ stand for batch, channel, depth, height and
width dimensions respectively. Pooling is applied on the ‘D’, ‘H’ and ‘W’
dimensions.</p>
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 5D input tensor with shape |
<cite>(batch_size, in_channels, depth, height, width)</cite> when <cite>layout</cite> is <cite>NCDHW</cite>.
| For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: 5D output tensor with shape |
| <cite>(batch_size, channels, 1, 1, 1)</cite> when <cite>layout</cite> is <cite>NCDHW</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.GroupNorm"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">GroupNorm</code><span class="sig-paren">(</span><em class="sig-param">num_groups=1</em>, <em class="sig-param">epsilon=1e-05</em>, <em class="sig-param">center=True</em>, <em class="sig-param">scale=True</em>, <em class="sig-param">beta_initializer='zeros'</em>, <em class="sig-param">gamma_initializer='ones'</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#GroupNorm"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.GroupNorm" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Applies group normalization to the n-dimensional input array. |
This operator takes an n-dimensional input array in which the two leftmost axes are
| <cite>batch</cite> and <cite>channel</cite> respectively:</p> |
| <div class="math notranslate nohighlight"> |
\[\begin{aligned}
x &amp;= x.\mathrm{reshape}((N, \mathit{num\_groups}, C \,//\, \mathit{num\_groups}, \ldots)) \\
\mathit{axis} &amp;= (2, \ldots) \\
\mathit{out} &amp;= \frac{x - \mathrm{mean}[x, \mathit{axis}]}{\sqrt{\mathrm{Var}[x, \mathit{axis}] + \epsilon}} * \gamma + \beta
\end{aligned}\]</div>
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.GroupNorm.hybrid_forward" title="mxnet.gluon.nn.GroupNorm.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, data, gamma, beta)</p></td> |
<td><p>Override this method to construct the symbolic graph for this <cite>Block</cite>.</p></td>
| </tr> |
| </tbody> |
| </table> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>num_groups</strong> (<em>int</em><em>, </em><em>default 1</em>) – Number of groups to separate the channel axis into.</p></li> |
| <li><p><strong>epsilon</strong> (<em>float</em><em>, </em><em>default 1e-5</em>) – Small float added to variance to avoid dividing by zero.</p></li> |
| <li><p><strong>center</strong> (<em>bool</em><em>, </em><em>default True</em>) – If True, add offset of <cite>beta</cite> to normalized tensor. |
| If False, <cite>beta</cite> is ignored.</p></li> |
| <li><p><strong>scale</strong> (<em>bool</em><em>, </em><em>default True</em>) – If True, multiply by <cite>gamma</cite>. If False, <cite>gamma</cite> is not used.</p></li> |
| <li><p><strong>beta_initializer</strong> (str or <cite>Initializer</cite>, default ‘zeros’) – Initializer for the beta weight.</p></li> |
| <li><p><strong>gamma_initializer</strong> (str or <cite>Initializer</cite>, default ‘ones’) – Initializer for the gamma weight.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with shape (N, C, …).</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: output tensor with the same shape as <cite>data</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p class="rubric">References</p> |
| <p><a class="reference external" href="https://arxiv.org/pdf/1803.08494.pdf">Group Normalization</a></p> |
| <p class="rubric">Examples</p> |
| <div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># Input of shape (2, 3, 4)</span> |
| <span class="gp">>>> </span><span class="n">x</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">array</span><span class="p">([[[</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">],</span> |
| <span class="go"> [ 4, 5, 6, 7],</span> |
| <span class="go"> [ 8, 9, 10, 11]],</span> |
| <span class="go"> [[12, 13, 14, 15],</span> |
| <span class="go"> [16, 17, 18, 19],</span> |
| <span class="go"> [20, 21, 22, 23]]])</span> |
| <span class="gp">>>> </span><span class="c1"># Group normalization is calculated with the above formula</span> |
| <span class="gp">>>> </span><span class="n">layer</span> <span class="o">=</span> <span class="n">GroupNorm</span><span class="p">()</span> |
| <span class="gp">>>> </span><span class="n">layer</span><span class="o">.</span><span class="n">initialize</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(</span><span class="mi">0</span><span class="p">))</span> |
| <span class="gp">>>> </span><span class="n">layer</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> |
| <span class="go">[[[-1.5932543 -1.3035717 -1.0138891 -0.7242065]</span> |
| <span class="go"> [-0.4345239 -0.1448413 0.1448413 0.4345239]</span> |
| <span class="go"> [ 0.7242065 1.0138891 1.3035717 1.5932543]]</span> |
| <span class="go"> [[-1.5932543 -1.3035717 -1.0138891 -0.7242065]</span> |
| <span class="go"> [-0.4345239 -0.1448413 0.1448413 0.4345239]</span> |
| <span class="go"> [ 0.7242065 1.0138891 1.3035717 1.5932543]]]</span> |
| <span class="go"><NDArray 2x3x4 @cpu(0)></span> |
| </pre></div> |
| </div> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.GroupNorm.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">data</em>, <em class="sig-param">gamma</em>, <em class="sig-param">beta</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#GroupNorm.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.GroupNorm.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
<dd><p>Override this method to construct the symbolic graph for this <cite>Block</cite>.</p>
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.HybridBlock"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">HybridBlock</code><span class="sig-paren">(</span><em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#HybridBlock"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridBlock" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.Block</span></code></p> |
| <p><cite>HybridBlock</cite> supports forwarding with both Symbol and NDArray.</p> |
| <p><cite>HybridBlock</cite> is similar to <cite>Block</cite>, with a few differences:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">mxnet</span> <span class="k">as</span> <span class="nn">mx</span> |
| <span class="kn">from</span> <span class="nn">mxnet.gluon</span> <span class="kn">import</span> <span class="n">HybridBlock</span><span class="p">,</span> <span class="n">nn</span> |
| |
| <span class="k">class</span> <span class="nc">Model</span><span class="p">(</span><span class="n">HybridBlock</span><span class="p">):</span> |
| <span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span> |
| <span class="nb">super</span><span class="p">(</span><span class="n">Model</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span><span class="o">**</span><span class="n">kwargs</span><span class="p">)</span> |
| <span class="c1"># use name_scope to give child Blocks appropriate names.</span> |
| <span class="k">with</span> <span class="bp">self</span><span class="o">.</span><span class="n">name_scope</span><span class="p">():</span> |
| <span class="bp">self</span><span class="o">.</span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span> |
| <span class="bp">self</span><span class="o">.</span><span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span> |
| |
| <span class="k">def</span> <span class="nf">hybrid_forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">F</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span> |
| <span class="n">x</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">relu</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">dense0</span><span class="p">(</span><span class="n">x</span><span class="p">))</span> |
| <span class="k">return</span> <span class="n">F</span><span class="o">.</span><span class="n">relu</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">dense1</span><span class="p">(</span><span class="n">x</span><span class="p">))</span> |
| |
| <span class="n">model</span> <span class="o">=</span> <span class="n">Model</span><span class="p">()</span> |
| <span class="n">model</span><span class="o">.</span><span class="n">initialize</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(</span><span class="mi">0</span><span class="p">))</span> |
| <span class="n">model</span><span class="o">.</span><span class="n">hybridize</span><span class="p">()</span> |
| <span class="n">model</span><span class="p">(</span><span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="mi">10</span><span class="p">,</span> <span class="mi">10</span><span class="p">),</span> <span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(</span><span class="mi">0</span><span class="p">)))</span> |
| </pre></div> |
| </div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridBlock.cast" title="mxnet.gluon.nn.HybridBlock.cast"><code class="xref py py-obj docutils literal notranslate"><span class="pre">cast</span></code></a>(dtype)</p></td> |
| <td><p>Cast this Block to use another data type.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridBlock.export" title="mxnet.gluon.nn.HybridBlock.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td> |
| <td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite>, <cite>mxnet.mod.Module</cite> or the C++ interface.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridBlock.forward" title="mxnet.gluon.nn.HybridBlock.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(x, *args)</p></td> |
| <td><p>Defines the forward computation.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridBlock.hybrid_forward" title="mxnet.gluon.nn.HybridBlock.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x, *args, **kwargs)</p></td> |
<td><p>Override this method to construct the symbolic graph for this <cite>Block</cite>.</p></td>
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridBlock.hybridize" title="mxnet.gluon.nn.HybridBlock.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, backend, clear, …])</p></td> |
| <td><p>Activates or deactivates <a class="reference internal" href="#mxnet.gluon.nn.HybridBlock" title="mxnet.gluon.nn.HybridBlock"><code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code></a> s recursively.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridBlock.infer_shape" title="mxnet.gluon.nn.HybridBlock.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td> |
| <td><p>Infers shape of Parameters from inputs.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridBlock.infer_type" title="mxnet.gluon.nn.HybridBlock.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td> |
| <td><p>Infers data type of Parameters from inputs.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridBlock.optimize_for" title="mxnet.gluon.nn.HybridBlock.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td> |
| <td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridBlock.register_child" title="mxnet.gluon.nn.HybridBlock.register_child"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_child</span></code></a>(block[, name])</p></td> |
| <td><p>Registers block as a child of self.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridBlock.register_op_hook" title="mxnet.gluon.nn.HybridBlock.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td> |
| <td><p>Install op hook for block recursively.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <p>Forward computation in <a class="reference internal" href="#mxnet.gluon.nn.HybridBlock" title="mxnet.gluon.nn.HybridBlock"><code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code></a> must be static to work with <code class="xref py py-class docutils literal notranslate"><span class="pre">Symbol</span></code> s, |
| i.e. you cannot call <code class="xref py py-meth docutils literal notranslate"><span class="pre">NDArray.asnumpy()</span></code>, <code class="xref py py-attr docutils literal notranslate"><span class="pre">NDArray.shape</span></code>, |
<code class="xref py py-attr docutils literal notranslate"><span class="pre">NDArray.dtype</span></code>, <cite>NDArray</cite> indexing (<cite>x[i]</cite>), etc., on tensors.
You also cannot use branching or loop logic that depends on non-constant
expressions, such as random numbers or intermediate results, since these
change the graph structure on each iteration.</p>
| <p>Before activating with <a class="reference internal" href="#mxnet.gluon.nn.HybridBlock.hybridize" title="mxnet.gluon.nn.HybridBlock.hybridize"><code class="xref py py-meth docutils literal notranslate"><span class="pre">hybridize()</span></code></a>, <a class="reference internal" href="#mxnet.gluon.nn.HybridBlock" title="mxnet.gluon.nn.HybridBlock"><code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code></a> works just like normal |
| <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a>. After activation, <a class="reference internal" href="#mxnet.gluon.nn.HybridBlock" title="mxnet.gluon.nn.HybridBlock"><code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code></a> will create a symbolic graph |
| representing the forward computation and cache it. On subsequent forwards, |
| the cached graph will be used instead of <a class="reference internal" href="#mxnet.gluon.nn.HybridBlock.hybrid_forward" title="mxnet.gluon.nn.HybridBlock.hybrid_forward"><code class="xref py py-meth docutils literal notranslate"><span class="pre">hybrid_forward()</span></code></a>.</p> |
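<p>The trace-and-cache behavior described above can be illustrated with a small plain-Python sketch (this mimics the caching idea only; it is not MXNet internals, and all names are hypothetical):</p>

```python
# Illustrative sketch of hybridize-style caching (not MXNet internals):
# the forward logic is traced once into a "graph" (here, a list of ops),
# and subsequent calls replay the cached graph instead of re-running it.
class TinyHybrid:
    def __init__(self):
        self._cached_ops = None   # the cached "graph"
        self.trace_count = 0      # how many times the forward logic actually ran

    def forward(self, x):
        # Forward must be static: a fixed sequence of ops with no
        # data-dependent branching, so one trace is valid for every input.
        self.trace_count += 1
        self._cached_ops = [lambda v: v * 2, lambda v: v + 1]
        out = x
        for op in self._cached_ops:
            out = op(out)
        return out

    def __call__(self, x):
        if self._cached_ops is None:
            return self.forward(x)        # first call: trace and cache
        out = x
        for op in self._cached_ops:       # later calls: replay the cache
            out = op(out)
        return out

net = TinyHybrid()
print(net(3))           # 7  (forward traced on this call)
print(net(5))           # 11 (replayed from the cached graph)
print(net.trace_count)  # 1
```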
| <p>Please see references for detailed tutorial.</p> |
| <p class="rubric">References</p> |
| <p><a class="reference external" href="https://mxnet.io/tutorials/gluon/hybrid.html">Hybrid - Faster training and easy deployment</a></p> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.HybridBlock.cast"> |
| <code class="sig-name descname">cast</code><span class="sig-paren">(</span><em class="sig-param">dtype</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#HybridBlock.cast"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridBlock.cast" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Cast this Block to use another data type.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>dtype</strong> (<em>str</em><em> or </em><em>numpy.dtype</em>) – The new data type.</p> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.HybridBlock.export"> |
| <code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#HybridBlock.export"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridBlock.export" title="Permalink to this definition">¶</a></dt> |
<dd><p>Export HybridBlock to a JSON format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite>, <cite>mxnet.mod.Module</cite> or the C++ interface.</p>
| <div class="admonition note"> |
| <p class="admonition-title">Note</p> |
<p>When there is only one input, it will have name <cite>data</cite>. When there
are more than one input, they will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
| </div> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
<li><p><strong>path</strong> (<em>str</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4-digit epoch number.</p></li>
| <li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
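<p>The file and input naming conventions above can be sketched with hypothetical helpers (these mirror the documented behavior; they are not MXNet source):</p>

```python
# Hypothetical helpers mirroring the documented export conventions.
def export_filenames(path, epoch=0):
    # export(path, epoch) writes `path-symbol.json` and `path-xxxx.params`,
    # where xxxx is the zero-padded 4-digit epoch number.
    return "%s-symbol.json" % path, "%s-%04d.params" % (path, epoch)

def input_names(n_inputs):
    # a single input is named 'data'; multiple inputs become 'data0', 'data1', ...
    return ["data"] if n_inputs == 1 else ["data%d" % i for i in range(n_inputs)]

print(export_filenames("resnet", 3))  # ('resnet-symbol.json', 'resnet-0003.params')
print(input_names(1))                 # ['data']
print(input_names(2))                 # ['data0', 'data1']
```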
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.HybridBlock.forward"> |
| <code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#HybridBlock.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridBlock.forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Defines the forward computation. Arguments can be either |
| <code class="xref py py-class docutils literal notranslate"><span class="pre">NDArray</span></code> or <code class="xref py py-class docutils literal notranslate"><span class="pre">Symbol</span></code>.</p> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.HybridBlock.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#HybridBlock.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridBlock.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.HybridBlock.hybridize"> |
| <code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#HybridBlock.hybridize"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridBlock.hybridize" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Activates or deactivates <a class="reference internal" href="#mxnet.gluon.nn.HybridBlock" title="mxnet.gluon.nn.HybridBlock"><code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code></a> s recursively. Has no effect on |
| non-hybrid children.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li> |
| <li><p><strong>backend</strong> (<em>str</em>) – The name of backend, as registered in <cite>SubgraphBackendRegistry</cite>, default None</p></li> |
| <li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default True</em>) – Clears any previous optimizations</p></li> |
| <li><p><strong>static_alloc</strong> (<em>optional bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li> |
| <li><p><strong>static_shape</strong> (<em>optional bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also |
| set static_alloc to True. Change of input shapes is still allowed |
| but slower.</p></li> |
| <li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li> |
| <li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li> |
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
| <li><p><strong>**kwargs</strong> (<em>optional</em>) – Backend options.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.HybridBlock.infer_shape"> |
| <code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#HybridBlock.infer_shape"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridBlock.infer_shape" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Infers shape of Parameters from inputs.</p> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.HybridBlock.infer_type"> |
| <code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#HybridBlock.infer_type"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridBlock.infer_type" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Infers data type of Parameters from inputs.</p> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.HybridBlock.optimize_for"> |
| <code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#HybridBlock.optimize_for"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridBlock.optimize_for" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Partitions the current HybridBlock and optimizes it for a given backend |
| without executing a forward pass. Modifies the HybridBlock in-place.</p> |
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. It can be used in place of hybridize;
afterwards, <cite>export</cite> can be called or inference can be run. See
<cite>example/extensions/lib_subgraph/README.md</cite> for more details.</p>
| <p class="rubric">Examples</p> |
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># partition and then export to file
block.optimize_for(x, backend='myPart')
block.export('partitioned')

# partition and then run inference
block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li> |
| <li><p><strong>*args</strong> (<a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li> |
| <li><p><strong>backend</strong> (<em>str</em>) – The name of backend, as registered in <cite>SubgraphBackendRegistry</cite>, default None</p></li> |
| <li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – Clears any previous optimizations</p></li> |
| <li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li> |
| <li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also |
| set static_alloc to True. Change of input shapes is still allowed |
| but slower.</p></li> |
| <li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li> |
| <li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li> |
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
| <li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.HybridBlock.register_child"> |
| <code class="sig-name descname">register_child</code><span class="sig-paren">(</span><em class="sig-param">block</em>, <em class="sig-param">name=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#HybridBlock.register_child"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridBlock.register_child" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Registers block as a child of self. <a class="reference internal" href="#mxnet.gluon.nn.Block" title="mxnet.gluon.nn.Block"><code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code></a> s assigned to self as |
| attributes will be registered automatically.</p> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.HybridBlock.register_op_hook"> |
| <code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#HybridBlock.register_op_hook"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridBlock.register_op_hook" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Install op hook for block recursively.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
<li><p><strong>callback</strong> (<em>function</em>) – Takes a string and an NDArrayHandle.</p></li>
| <li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If true, monitor both input and output, otherwise monitor output only.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
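<p>Conceptually, the hook callback is invoked with a name and the corresponding array as operators execute; with <cite>monitor_all=True</cite>, inputs are reported as well as outputs. A plain-Python sketch of that flow (hypothetical names, not MXNet internals):</p>

```python
# Conceptual sketch of op hooks: the callback sees each op's output name
# and value; with monitor_all=True it also sees each op's input.
monitored = []

def callback(name, value):
    monitored.append((name, value))

def run_with_hook(named_ops, x, hook, monitor_all=False):
    out = x
    for name, op in named_ops:
        if monitor_all:
            hook(name + "_input", out)
        out = op(out)
        hook(name + "_output", out)
    return out

ops = [("scale", lambda v: v * 2), ("shift", lambda v: v + 1)]
result = run_with_hook(ops, 3, callback, monitor_all=True)
print(result)                          # 7
print([name for name, _ in monitored])
# ['scale_input', 'scale_output', 'shift_input', 'shift_output']
```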
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.HybridLambda"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">HybridLambda</code><span class="sig-paren">(</span><em class="sig-param">function</em>, <em class="sig-param">prefix=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#HybridLambda"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridLambda" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Wraps an operator or an expression as a HybridBlock object.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>function</strong> (<em>str</em><em> or </em><em>function</em>) – <p>Function used in lambda must be one of the following: |
| 1) The name of an operator that is available in both symbol and ndarray. For example:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">block</span> <span class="o">=</span> <span class="n">HybridLambda</span><span class="p">(</span><span class="s1">'tanh'</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <ol class="arabic" start="2"> |
| <li><p>A function that conforms to <code class="docutils literal notranslate"><span class="pre">def</span> <span class="pre">function(F,</span> <span class="pre">data,</span> <span class="pre">*args)</span></code>. For example:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">block</span> <span class="o">=</span> <span class="n">HybridLambda</span><span class="p">(</span><span class="k">lambda</span> <span class="n">F</span><span class="p">,</span> <span class="n">x</span><span class="p">:</span> <span class="n">F</span><span class="o">.</span><span class="n">LeakyReLU</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">slope</span><span class="o">=</span><span class="mf">0.1</span><span class="p">))</span> |
| </pre></div> |
| </div> |
| </li> |
| </ol> |
| </p></li> |
| <li><p><strong>Inputs</strong> – <ul> |
<li><p><strong>*args</strong>: one or more input data. The first argument must be a symbol or ndarray; their shapes depend on the function.</p>
</li>
| </ul> |
| </p></li> |
| <li><p><strong>Output</strong> – <ul> |
<li><p><strong>*outputs</strong>: one or more output data. Their shapes depend on the function.</p></li>
| </ul> |
| </p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridLambda.hybrid_forward" title="mxnet.gluon.nn.HybridLambda.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x, *args)</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.HybridLambda.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em>, <em class="sig-param">*args</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#HybridLambda.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridLambda.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.HybridSequential"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">HybridSequential</code><span class="sig-paren">(</span><em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#HybridSequential"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridSequential" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Stacks HybridBlocks sequentially.</p> |
| <p>Example:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">net</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">HybridSequential</span><span class="p">()</span> |
| <span class="c1"># use net's name_scope to give child Blocks appropriate names.</span> |
| <span class="k">with</span> <span class="n">net</span><span class="o">.</span><span class="n">name_scope</span><span class="p">():</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">10</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s1">'relu'</span><span class="p">))</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">))</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">hybridize</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridSequential.add" title="mxnet.gluon.nn.HybridSequential.add"><code class="xref py py-obj docutils literal notranslate"><span class="pre">add</span></code></a>(*blocks)</p></td> |
| <td><p>Adds block on top of the stack.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.HybridSequential.hybrid_forward" title="mxnet.gluon.nn.HybridSequential.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x)</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.HybridSequential.add"> |
| <code class="sig-name descname">add</code><span class="sig-paren">(</span><em class="sig-param">*blocks</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#HybridSequential.add"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridSequential.add" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Adds block on top of the stack.</p> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.HybridSequential.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#HybridSequential.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.HybridSequential.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.InstanceNorm"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">InstanceNorm</code><span class="sig-paren">(</span><em class="sig-param">axis=1</em>, <em class="sig-param">epsilon=1e-05</em>, <em class="sig-param">center=True</em>, <em class="sig-param">scale=False</em>, <em class="sig-param">beta_initializer='zeros'</em>, <em class="sig-param">gamma_initializer='ones'</em>, <em class="sig-param">in_channels=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#InstanceNorm"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.InstanceNorm" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
<p>Applies instance normalization to the n-dimensional input array.
This operator takes an n-dimensional input array (where n &gt; 2) and normalizes
the input using the following formula:</p>
| <div class="math notranslate nohighlight"> |
| \[ \begin{align}\begin{aligned}\bar{C} = \{i \mid i \neq 0, i \neq axis\}\\out = \frac{x - mean[data, \bar{C}]}{ \sqrt{Var[data, \bar{C}]} + \epsilon} |
| * gamma + beta\end{aligned}\end{align} \]</div> |
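<p>The formula can be checked with a few lines of plain Python (using gamma = 1 and beta = 0; no MXNet required). For an input of shape (2, 1, 2), each (instance, channel) slice is normalized over its remaining axis:</p>

```python
import math

def instance_norm_1d(row, eps=1e-5):
    # normalize one (instance, channel) slice: (x - mean) / (sqrt(var) + eps)
    mean = sum(row) / len(row)
    var = sum((v - mean) ** 2 for v in row) / len(row)
    return [(v - mean) / (math.sqrt(var) + eps) for v in row]

x = [[[1.1, 2.2]], [[3.3, 4.4]]]  # shape (2, 1, 2)
out = [[instance_norm_1d(channel) for channel in instance] for instance in x]
print(out)  # each slice normalizes to approximately [-0.99998, 0.99998]
```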
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.InstanceNorm.hybrid_forward" title="mxnet.gluon.nn.InstanceNorm.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x, gamma, beta)</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>axis</strong> (<em>int</em><em>, </em><em>default 1</em>) – The axis that will be excluded in the normalization process. This is typically the channels |
| (C) axis. For instance, after a <cite>Conv2D</cite> layer with <cite>layout=’NCHW’</cite>, |
| set <cite>axis=1</cite> in <cite>InstanceNorm</cite>. If <cite>layout=’NHWC’</cite>, then set <cite>axis=3</cite>. Data will be |
| normalized along axes excluding the first axis and the axis given.</p></li> |
| <li><p><strong>epsilon</strong> (<em>float</em><em>, </em><em>default 1e-5</em>) – Small float added to variance to avoid dividing by zero.</p></li> |
| <li><p><strong>center</strong> (<em>bool</em><em>, </em><em>default True</em>) – If True, add offset of <cite>beta</cite> to normalized tensor. |
| If False, <cite>beta</cite> is ignored.</p></li> |
<li><p><strong>scale</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, multiply by <cite>gamma</cite>. If False, <cite>gamma</cite> is not used.
When the next layer is linear (e.g. also <cite>nn.relu</cite>),
this can be disabled, since the scaling
will be done by the next layer.</p></li>
| <li><p><strong>beta_initializer</strong> (str or <cite>Initializer</cite>, default ‘zeros’) – Initializer for the beta weight.</p></li> |
| <li><p><strong>gamma_initializer</strong> (str or <cite>Initializer</cite>, default ‘ones’) – Initializer for the gamma weight.</p></li> |
| <li><p><strong>in_channels</strong> (<em>int</em><em>, </em><em>default 0</em>) – Number of channels (feature maps) in input data. If not specified, |
| initialization will be deferred to the first time <cite>forward</cite> is called |
| and <cite>in_channels</cite> will be inferred from the shape of input data.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with arbitrary shape.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: output tensor with the same shape as <cite>data</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p class="rubric">References</p> |
| <p><a class="reference external" href="https://arxiv.org/abs/1607.08022">Instance Normalization: The Missing Ingredient for Fast Stylization</a></p> |
| <p class="rubric">Examples</p> |
| <div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># Input of shape (2,1,2)</span> |
| <span class="gp">>>> </span><span class="n">x</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">array</span><span class="p">([[[</span> <span class="mf">1.1</span><span class="p">,</span> <span class="mf">2.2</span><span class="p">]],</span> |
| <span class="gp">... </span> <span class="p">[[</span> <span class="mf">3.3</span><span class="p">,</span> <span class="mf">4.4</span><span class="p">]]])</span> |
| <span class="gp">>>> </span><span class="c1"># Instance normalization is calculated with the above formula</span> |
| <span class="gp">>>> </span><span class="n">layer</span> <span class="o">=</span> <span class="n">InstanceNorm</span><span class="p">()</span> |
| <span class="gp">>>> </span><span class="n">layer</span><span class="o">.</span><span class="n">initialize</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(</span><span class="mi">0</span><span class="p">))</span> |
| <span class="gp">>>> </span><span class="n">layer</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> |
| <span class="go">[[[-0.99998355 0.99998331]]</span> |
| <span class="go"> [[-0.99998319 0.99998361]]]</span> |
| <span class="go"><NDArray 2x1x2 @cpu(0)></span> |
| </pre></div> |
| </div> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.InstanceNorm.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em>, <em class="sig-param">gamma</em>, <em class="sig-param">beta</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#InstanceNorm.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.InstanceNorm.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Lambda"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Lambda</code><span class="sig-paren">(</span><em class="sig-param">function</em>, <em class="sig-param">prefix=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#Lambda"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Lambda" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.Block</span></code></p> |
| <p>Wraps an operator or an expression as a Block object.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
<li><p><strong>function</strong> (<em>str</em><em> or </em><em>function</em>) – <p>The function to wrap. It must be one of the following:
1) the name of an operator that is available in ndarray. For example:</p>
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">block</span> <span class="o">=</span> <span class="n">Lambda</span><span class="p">(</span><span class="s1">'tanh'</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <ol class="arabic" start="2"> |
| <li><p>a function that conforms to <code class="docutils literal notranslate"><span class="pre">def</span> <span class="pre">function(*args)</span></code>. For example:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">block</span> <span class="o">=</span> <span class="n">Lambda</span><span class="p">(</span><span class="k">lambda</span> <span class="n">x</span><span class="p">:</span> <span class="n">nd</span><span class="o">.</span><span class="n">LeakyReLU</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">slope</span><span class="o">=</span><span class="mf">0.1</span><span class="p">))</span> |
| </pre></div> |
| </div> |
| </li> |
| </ol> |
| </p></li> |
<li><p><strong>Inputs</strong> – <ul>
<li><p><strong>*args</strong>: one or more input tensors. Their shapes depend on the function.</p></li>
</ul>
</p></li>
<li><p><strong>Outputs</strong> – <ul>
<li><p><strong>*outputs</strong>: one or more output tensors. Their shapes depend on the function.</p></li>
</ul>
</p></li>
| </ul> |
| </dd> |
| </dl> |
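<p>Conceptually, <cite>Lambda</cite> just stores the callable and applies it to its inputs. The following is a minimal pure-Python sketch of that idea (a hypothetical stand-in, not the actual MXNet implementation, which operates on NDArrays):</p>

```python
class LambdaSketch:
    """Minimal sketch: wrap any callable as a layer-like object.

    Hypothetical stand-in for mxnet.gluon.nn.Lambda, written in plain
    Python instead of NDArray operators.
    """
    def __init__(self, function):
        if not callable(function):
            raise TypeError("function must be callable")
        self.function = function

    def __call__(self, *args):
        # forward() simply delegates to the wrapped function
        return self.function(*args)

# Wrap an elementwise doubling function, as Lambda would wrap an operator.
double = LambdaSketch(lambda xs: [2 * x for x in xs])
print(double([1, 2, 3]))  # [2, 4, 6]
```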
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Lambda.forward" title="mxnet.gluon.nn.Lambda.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(*args)</p></td> |
| <td><p>Overrides to implement forward computation using <code class="xref py py-class docutils literal notranslate"><span class="pre">NDArray</span></code>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Lambda.forward"> |
| <code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#Lambda.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Lambda.forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to implement forward computation using <code class="xref py py-class docutils literal notranslate"><span class="pre">NDArray</span></code>. Only |
| accepts positional arguments.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>*args</strong> (<em>list of NDArray</em>) – Input tensors.</p> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.LayerNorm"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">LayerNorm</code><span class="sig-paren">(</span><em class="sig-param">axis=-1</em>, <em class="sig-param">epsilon=1e-05</em>, <em class="sig-param">center=True</em>, <em class="sig-param">scale=True</em>, <em class="sig-param">beta_initializer='zeros'</em>, <em class="sig-param">gamma_initializer='ones'</em>, <em class="sig-param">in_channels=0</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#LayerNorm"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.LayerNorm" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Applies layer normalization to the n-dimensional input array. |
| This operator takes an n-dimensional input array and normalizes |
| the input using the given axis:</p> |
| <div class="math notranslate nohighlight"> |
| \[out = \frac{x - mean[data, axis]}{ \sqrt{Var[data, axis] + \epsilon}} * gamma + beta\]</div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.LayerNorm.hybrid_forward" title="mxnet.gluon.nn.LayerNorm.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, data, gamma, beta)</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>axis</strong> (<em>int</em><em>, </em><em>default -1</em>) – The axis that should be normalized. This is typically the axis of the channels.</p></li> |
| <li><p><strong>epsilon</strong> (<em>float</em><em>, </em><em>default 1e-5</em>) – Small float added to variance to avoid dividing by zero.</p></li> |
| <li><p><strong>center</strong> (<em>bool</em><em>, </em><em>default True</em>) – If True, add offset of <cite>beta</cite> to normalized tensor. |
| If False, <cite>beta</cite> is ignored.</p></li> |
| <li><p><strong>scale</strong> (<em>bool</em><em>, </em><em>default True</em>) – If True, multiply by <cite>gamma</cite>. If False, <cite>gamma</cite> is not used.</p></li> |
| <li><p><strong>beta_initializer</strong> (str or <cite>Initializer</cite>, default ‘zeros’) – Initializer for the beta weight.</p></li> |
| <li><p><strong>gamma_initializer</strong> (str or <cite>Initializer</cite>, default ‘ones’) – Initializer for the gamma weight.</p></li> |
| <li><p><strong>in_channels</strong> (<em>int</em><em>, </em><em>default 0</em>) – Number of channels (feature maps) in input data. If not specified, |
| initialization will be deferred to the first time <cite>forward</cite> is called |
| and <cite>in_channels</cite> will be inferred from the shape of input data.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with arbitrary shape.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: output tensor with the same shape as <cite>data</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p class="rubric">References</p> |
| <p><a class="reference external" href="https://arxiv.org/pdf/1607.06450.pdf">Layer Normalization</a></p> |
| <p class="rubric">Examples</p> |
| <div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># Input of shape (2, 5)</span> |
| <span class="gp">>>> </span><span class="n">x</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">array</span><span class="p">([[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">],</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">]])</span> |
| <span class="gp">>>> </span><span class="c1"># Layer normalization is calculated with the above formula</span> |
| <span class="gp">>>> </span><span class="n">layer</span> <span class="o">=</span> <span class="n">LayerNorm</span><span class="p">()</span> |
| <span class="gp">>>> </span><span class="n">layer</span><span class="o">.</span><span class="n">initialize</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(</span><span class="mi">0</span><span class="p">))</span> |
| <span class="gp">>>> </span><span class="n">layer</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> |
| <span class="go">[[-1.41421 -0.707105 0. 0.707105 1.41421 ]</span> |
| <span class="go"> [-1.2247195 -1.2247195 0.81647956 0.81647956 0.81647956]]</span> |
| <span class="go"><NDArray 2x5 @cpu(0)></span> |
| </pre></div> |
| </div> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.LayerNorm.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">data</em>, <em class="sig-param">gamma</em>, <em class="sig-param">beta</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#LayerNorm.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.LayerNorm.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
<li><p><strong>data</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li>
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.LeakyReLU"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">LeakyReLU</code><span class="sig-paren">(</span><em class="sig-param">alpha</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/activations.html#LeakyReLU"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.LeakyReLU" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Leaky version of a Rectified Linear Unit.</p> |
| <p>It allows a small gradient when the unit is not active</p> |
| <div class="math notranslate nohighlight"> |
| \[\begin{split}f\left(x\right) = \left\{ |
| \begin{array}{lr} |
| \alpha x & : x \lt 0 \\ |
| x & : x \geq 0 \\ |
| \end{array} |
| \right.\\\end{split}\]</div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.LeakyReLU.hybrid_forward" title="mxnet.gluon.nn.LeakyReLU.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x)</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>alpha</strong> (<em>float</em>) – slope coefficient for the negative half axis. Must be >= 0.</p> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with arbitrary shape.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: output tensor with the same shape as <cite>data</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
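<p>The piecewise definition above can be checked with a short pure-Python sketch (illustrative only; the real layer operates elementwise on NDArrays):</p>

```python
def leaky_relu(x, alpha):
    """f(x) = alpha * x for x < 0, and x for x >= 0 (alpha >= 0)."""
    if alpha < 0:
        raise ValueError("alpha must be >= 0")
    return alpha * x if x < 0 else x

# Negative inputs are scaled by alpha; non-negative inputs pass through.
print([leaky_relu(v, 0.1) for v in [-2.0, -0.5, 0.0, 3.0]])
# [-0.2, -0.05, 0.0, 3.0]
```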
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.LeakyReLU.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/activations.html#LeakyReLU.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.LeakyReLU.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.MaxPool1D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">MaxPool1D</code><span class="sig-paren">(</span><em class="sig-param">pool_size=2</em>, <em class="sig-param">strides=None</em>, <em class="sig-param">padding=0</em>, <em class="sig-param">layout='NCW'</em>, <em class="sig-param">ceil_mode=False</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#MaxPool1D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.MaxPool1D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Pooling</span></code></p> |
| <p>Max pooling operation for one dimensional data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>pool_size</strong> (<em>int</em>) – Size of the max pooling windows.</p></li> |
| <li><p><strong>strides</strong> (<em>int</em><em>, or </em><em>None</em>) – Factor by which to downscale. E.g. 2 will halve the input size. |
| If <cite>None</cite>, it will default to <cite>pool_size</cite>.</p></li> |
| <li><p><strong>padding</strong> (<em>int</em>) – If padding is non-zero, then the input is implicitly |
| zero-padded on both sides for padding number of points.</p></li> |
<li><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCW'</em>) – Dimension ordering of data and out (‘NCW’ or ‘NWC’).
‘N’, ‘C’, ‘W’ stand for batch, channel, and width (time) dimensions
respectively. Pooling is applied on the W dimension.</p></li>
| <li><p><strong>ceil_mode</strong> (<em>bool</em><em>, </em><em>default False</em>) – When <cite>True</cite>, will use ceil instead of floor to compute the output shape.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 3D input tensor with shape <cite>(batch_size, in_channels, width)</cite> |
| when <cite>layout</cite> is <cite>NCW</cite>. For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul> |
| <li><p><strong>out</strong>: 3D output tensor with shape <cite>(batch_size, channels, out_width)</cite> |
| when <cite>layout</cite> is <cite>NCW</cite>. out_width is calculated as:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">out_width</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">width</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="o">-</span><span class="n">pool_size</span><span class="p">)</span><span class="o">/</span><span class="n">strides</span><span class="p">)</span><span class="o">+</span><span class="mi">1</span> |
| </pre></div> |
| </div> |
| <p>When <cite>ceil_mode</cite> is <cite>True</cite>, ceil will be used instead of floor in this |
| equation.</p> |
| </li> |
| </ul> |
| </dd> |
| </dl> |
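<p>The output-shape rule above can be reproduced with a small helper (a sketch of the shape arithmetic only, not of the pooling computation itself):</p>

```python
import math

def pool1d_out_width(width, pool_size=2, strides=None, padding=0,
                     ceil_mode=False):
    """out_width = floor((width + 2*padding - pool_size) / strides) + 1,
    using ceil instead of floor when ceil_mode is True."""
    strides = pool_size if strides is None else strides
    rnd = math.ceil if ceil_mode else math.floor
    return rnd((width + 2 * padding - pool_size) / strides) + 1

print(pool1d_out_width(7))                  # floor((7 - 2)/2) + 1 = 3
print(pool1d_out_width(7, ceil_mode=True))  # ceil((7 - 2)/2) + 1 = 4
```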
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.MaxPool2D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">MaxPool2D</code><span class="sig-paren">(</span><em class="sig-param">pool_size=(2</em>, <em class="sig-param">2)</em>, <em class="sig-param">strides=None</em>, <em class="sig-param">padding=0</em>, <em class="sig-param">layout='NCHW'</em>, <em class="sig-param">ceil_mode=False</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#MaxPool2D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.MaxPool2D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Pooling</span></code></p> |
| <p>Max pooling operation for two dimensional (spatial) data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>pool_size</strong> (<em>int</em><em> or </em><em>list/tuple of 2 ints</em><em>,</em>) – Size of the max pooling windows.</p></li> |
| <li><p><strong>strides</strong> (<em>int</em><em>, </em><em>list/tuple of 2 ints</em><em>, or </em><em>None.</em>) – Factor by which to downscale. E.g. 2 will halve the input size. |
| If <cite>None</cite>, it will default to <cite>pool_size</cite>.</p></li> |
| <li><p><strong>padding</strong> (<em>int</em><em> or </em><em>list/tuple of 2 ints</em><em>,</em>) – If padding is non-zero, then the input is implicitly |
| zero-padded on both sides for padding number of points.</p></li> |
<li><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCHW'</em>) – Dimension ordering of data and out (‘NCHW’ or ‘NHWC’).
‘N’, ‘C’, ‘H’, ‘W’ stand for batch, channel, height, and width
dimensions respectively. Padding is applied on the ‘H’ and ‘W’ dimensions.</p></li>
| <li><p><strong>ceil_mode</strong> (<em>bool</em><em>, </em><em>default False</em>) – When <cite>True</cite>, will use ceil instead of floor to compute the output shape.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 4D input tensor with shape |
| <cite>(batch_size, in_channels, height, width)</cite> when <cite>layout</cite> is <cite>NCHW</cite>. |
| For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul> |
| <li><p><strong>out</strong>: 4D output tensor with shape |
| <cite>(batch_size, channels, out_height, out_width)</cite> when <cite>layout</cite> is <cite>NCHW</cite>. |
| out_height and out_width are calculated as:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">out_height</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">height</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="n">pool_size</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span><span class="o">/</span><span class="n">strides</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| <span class="n">out_width</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">width</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">-</span><span class="n">pool_size</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span><span class="o">/</span><span class="n">strides</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| </pre></div> |
| </div> |
<p>When <cite>ceil_mode</cite> is <cite>True</cite>, ceil will be used instead of floor in these
equations.</p>
| </li> |
| </ul> |
| </dd> |
| </dl> |
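<p>The same rule applies independently per spatial axis; a short sketch of the shape arithmetic (not of the pooling itself, and with hypothetical helper names):</p>

```python
import math

def pool2d_out_shape(height, width, pool_size=(2, 2), strides=None,
                     padding=(0, 0), ceil_mode=False):
    """Apply the 1D output-size rule independently on the H and W axes."""
    strides = pool_size if strides is None else strides
    rnd = math.ceil if ceil_mode else math.floor
    return tuple(rnd((size + 2 * pad - pool) / stride) + 1
                 for size, pool, stride, pad
                 in zip((height, width), pool_size, strides, padding))

print(pool2d_out_shape(32, 32))                # (16, 16)
print(pool2d_out_shape(7, 9, padding=(1, 0)))  # (4, 4)
```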
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.MaxPool3D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">MaxPool3D</code><span class="sig-paren">(</span><em class="sig-param">pool_size=(2</em>, <em class="sig-param">2</em>, <em class="sig-param">2)</em>, <em class="sig-param">strides=None</em>, <em class="sig-param">padding=0</em>, <em class="sig-param">ceil_mode=False</em>, <em class="sig-param">layout='NCDHW'</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#MaxPool3D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.MaxPool3D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.conv_layers._Pooling</span></code></p> |
| <p>Max pooling operation for 3D data (spatial or spatio-temporal).</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>pool_size</strong> (<em>int</em><em> or </em><em>list/tuple of 3 ints</em><em>,</em>) – Size of the max pooling windows.</p></li> |
| <li><p><strong>strides</strong> (<em>int</em><em>, </em><em>list/tuple of 3 ints</em><em>, or </em><em>None.</em>) – Factor by which to downscale. E.g. 2 will halve the input size. |
| If <cite>None</cite>, it will default to <cite>pool_size</cite>.</p></li> |
| <li><p><strong>padding</strong> (<em>int</em><em> or </em><em>list/tuple of 3 ints</em><em>,</em>) – If padding is non-zero, then the input is implicitly |
| zero-padded on both sides for padding number of points.</p></li> |
<li><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NCDHW'</em>) – Dimension ordering of data and out (‘NCDHW’ or ‘NDHWC’).
‘N’, ‘C’, ‘D’, ‘H’, ‘W’ stand for batch, channel, depth, height and
width dimensions respectively. Padding is applied on the ‘D’, ‘H’ and ‘W’
dimensions.</p></li>
| <li><p><strong>ceil_mode</strong> (<em>bool</em><em>, </em><em>default False</em>) – When <cite>True</cite>, will use ceil instead of floor to compute the output shape.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: 5D input tensor with shape |
<cite>(batch_size, in_channels, depth, height, width)</cite> when <cite>layout</cite> is <cite>NCDHW</cite>.
| For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul> |
| <li><p><strong>out</strong>: 5D output tensor with shape |
| <cite>(batch_size, channels, out_depth, out_height, out_width)</cite> when <cite>layout</cite> is <cite>NCDHW</cite>. |
| out_depth, out_height and out_width are calculated as:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">out_depth</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">depth</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="n">pool_size</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span><span class="o">/</span><span class="n">strides</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| <span class="n">out_height</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">height</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">-</span><span class="n">pool_size</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span><span class="o">/</span><span class="n">strides</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| <span class="n">out_width</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">width</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span><span class="o">-</span><span class="n">pool_size</span><span class="p">[</span><span class="mi">2</span><span class="p">])</span><span class="o">/</span><span class="n">strides</span><span class="p">[</span><span class="mi">2</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| </pre></div> |
| </div> |
<p>When <cite>ceil_mode</cite> is <cite>True</cite>, ceil will be used instead of floor in these
equations.</p>
| </li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.PReLU"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">PReLU</code><span class="sig-paren">(</span><em class="sig-param">alpha_initializer=<mxnet.initializer.Constant object></em>, <em class="sig-param">in_channels=1</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/activations.html#PReLU"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.PReLU" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
<p>Parametric leaky version of a Rectified Linear Unit, introduced in the
<a class="reference external" href="https://arxiv.org/abs/1502.01852">Delving Deep into Rectifiers</a> paper.</p>
<p>It learns the slope of the negative half axis when the unit is not active</p>
| <div class="math notranslate nohighlight"> |
| \[\begin{split}f\left(x\right) = \left\{ |
| \begin{array}{lr} |
| \alpha x & : x \lt 0 \\ |
| x & : x \geq 0 \\ |
| \end{array} |
| \right.\\\end{split}\]</div> |
<p>where alpha is a learned parameter.</p>
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.PReLU.hybrid_forward" title="mxnet.gluon.nn.PReLU.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x, alpha)</p></td>
<td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
</tbody>
</table>
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
<li><p><strong>alpha_initializer</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the learnable <cite>alpha</cite> parameter.</p></li>
| <li><p><strong>in_channels</strong> (<em>int</em><em>, </em><em>default 1</em>) – Number of channels (alpha parameters) to learn. Can either be 1 |
| or <cite>n</cite> where <cite>n</cite> is the size of the second dimension of the input |
| tensor.</p></li> |
| <li><p><strong>Inputs</strong> – <ul> |
| <li><p><strong>data</strong>: input tensor with arbitrary shape.</p></li> |
| </ul> |
| </p></li> |
| <li><p><strong>Outputs</strong> – <ul> |
| <li><p><strong>out</strong>: output tensor with the same shape as <cite>data</cite>.</p></li> |
| </ul> |
| </p></li> |
| </ul> |
| </dd> |
| </dl> |
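The piecewise definition above can be checked numerically. The sketch below is illustrative only and uses NumPy rather than Gluon; in the actual layer, `alpha` is a learned `Parameter`, not a fixed constant:

```python
import numpy as np

def prelu(x, alpha):
    # f(x) = alpha * x for x < 0, and x for x >= 0
    return np.where(x < 0, alpha * x, x)

x = np.array([-2.0, -0.5, 0.0, 3.0])
print(prelu(x, alpha=0.25))  # negative inputs are scaled by alpha
```

With `in_channels > 1`, Gluon broadcasts one `alpha` per channel along the second axis of the input; the scalar case above is the `in_channels=1` default.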
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.PReLU.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em>, <em class="sig-param">alpha</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/activations.html#PReLU.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.PReLU.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.ReflectionPad2D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">ReflectionPad2D</code><span class="sig-paren">(</span><em class="sig-param">padding=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#ReflectionPad2D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.ReflectionPad2D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Pads the input tensor using the reflection of the input boundary.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>padding</strong> (<em>int</em>) – The size of the padding applied to all four sides of the input.</p> |
| </dd> |
| </dl> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.ReflectionPad2D.hybrid_forward" title="mxnet.gluon.nn.ReflectionPad2D.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x)</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with the shape <span class="math notranslate nohighlight">\((N, C, H_{in}, W_{in})\)</span>.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul> |
| <li><p><strong>out</strong>: output tensor with the shape <span class="math notranslate nohighlight">\((N, C, H_{out}, W_{out})\)</span>, where</p> |
| <div class="math notranslate nohighlight"> |
| \[ \begin{align}\begin{aligned}H_{out} = H_{in} + 2 \cdot padding\\W_{out} = W_{in} + 2 \cdot padding\end{aligned}\end{align} \]</div> |
| </li> |
| </ul> |
| </dd> |
| </dl> |
| <p class="rubric">Examples</p> |
| <div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">m</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">ReflectionPad2D</span><span class="p">(</span><span class="mi">3</span><span class="p">)</span> |
| <span class="gp">>>> </span><span class="nb">input</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">normal</span><span class="p">(</span><span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">16</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">224</span><span class="p">,</span> <span class="mi">224</span><span class="p">))</span> |
| <span class="gp">>>> </span><span class="n">output</span> <span class="o">=</span> <span class="n">m</span><span class="p">(</span><span class="nb">input</span><span class="p">)</span> |
| </pre></div> |
| </div> |
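The output-shape formula above can be verified with NumPy's reflect padding, which mirrors the boundary in the same way for the spatial axes. This is an illustrative check, not the Gluon implementation:

```python
import numpy as np

padding = 3
x = np.zeros((16, 3, 224, 224))
# Reflect-pad only the spatial (H, W) axes, as ReflectionPad2D does;
# the batch (N) and channel (C) axes are left untouched.
out = np.pad(x, ((0, 0), (0, 0), (padding, padding), (padding, padding)),
             mode='reflect')
print(out.shape)  # H_out = W_out = 224 + 2 * 3 = 230
```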
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.ReflectionPad2D.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/conv_layers.html#ReflectionPad2D.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.ReflectionPad2D.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.SELU"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">SELU</code><span class="sig-paren">(</span><em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/activations.html#SELU"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.SELU" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <dl class="simple"> |
| <dt>Scaled Exponential Linear Unit (SELU)</dt><dd><p>“Self-Normalizing Neural Networks”, Klambauer et al., 2017,
| <a class="reference external" href="https://arxiv.org/abs/1706.02515">https://arxiv.org/abs/1706.02515</a></p> |
| </dd> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with arbitrary shape.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: output tensor with the same shape as <cite>data</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.SELU.hybrid_forward" title="mxnet.gluon.nn.SELU.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x)</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
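SELU applies a scaled ELU with fixed constants derived in the Self-Normalizing Neural Networks paper. The following NumPy sketch is illustrative only (the constants are from the paper, truncated to double precision):

```python
import numpy as np

# Fixed constants from the Self-Normalizing Neural Networks paper
SCALE = 1.0507009873554805
ALPHA = 1.6732632423543772

def selu(x):
    # scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

print(selu(np.array([-1.0, 0.0, 1.0])))
```

With these constants, activations of a deep network with suitably initialized weights converge toward zero mean and unit variance, which is the point of the paper.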
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.SELU.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/activations.html#SELU.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.SELU.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Sequential"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Sequential</code><span class="sig-paren">(</span><em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#Sequential"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Sequential" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.Block</span></code></p> |
| <p>Stacks Blocks sequentially.</p> |
| <p>Example:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">net</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Sequential</span><span class="p">()</span> |
| <span class="c1"># use net's name_scope to give child Blocks appropriate names.</span> |
| <span class="k">with</span> <span class="n">net</span><span class="o">.</span><span class="n">name_scope</span><span class="p">():</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">10</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s1">'relu'</span><span class="p">))</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">))</span> |
| </pre></div> |
| </div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Sequential.add" title="mxnet.gluon.nn.Sequential.add"><code class="xref py py-obj docutils literal notranslate"><span class="pre">add</span></code></a>(*blocks)</p></td> |
| <td><p>Adds block(s) on top of the stack.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Sequential.forward" title="mxnet.gluon.nn.Sequential.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(x)</p></td> |
| <td><p>Overrides to implement forward computation using <code class="xref py py-class docutils literal notranslate"><span class="pre">NDArray</span></code>.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Sequential.hybridize" title="mxnet.gluon.nn.Sequential.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active])</p></td> |
| <td><p>Activates or deactivates <cite>HybridBlock</cite>s recursively.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Sequential.add"> |
| <code class="sig-name descname">add</code><span class="sig-paren">(</span><em class="sig-param">*blocks</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#Sequential.add"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Sequential.add" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Adds block(s) on top of the stack.</p> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Sequential.forward"> |
| <code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#Sequential.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Sequential.forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to implement forward computation using <code class="xref py py-class docutils literal notranslate"><span class="pre">NDArray</span></code>. Only |
| accepts positional arguments.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>*args</strong> (<em>list of NDArray</em>) – Input tensors.</p> |
| </dd> |
| </dl> |
| </dd></dl> |
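Conceptually, `Sequential.forward` simply threads the input through each child block in order. A minimal plain-Python sketch of that chaining (illustrative only; the real class also handles naming, parameter collection, and hybridization):

```python
class TinySequential:
    """Illustrative sketch of sequential chaining; not the Gluon class."""

    def __init__(self):
        self._blocks = []

    def add(self, *blocks):
        # Mirror Sequential.add: append one or more callables to the stack.
        self._blocks.extend(blocks)

    def __call__(self, x):
        # forward: feed each block's output into the next.
        for block in self._blocks:
            x = block(x)
        return x

net = TinySequential()
net.add(lambda x: x + 1, lambda x: x * 2)
print(net(3))  # (3 + 1) * 2 = 8
```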
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Sequential.hybridize"> |
| <code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/basic_layers.html#Sequential.hybridize"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Sequential.hybridize" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Activates or deactivates <cite>HybridBlock</cite>s recursively. Has no effect on
| non-hybrid children.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li> |
| <li><p><strong>**kwargs</strong> (<em>string</em>) – Additional flags for hybridized operator.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.Swish"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">Swish</code><span class="sig-paren">(</span><em class="sig-param">beta=1.0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/activations.html#Swish"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Swish" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <dl class="simple"> |
| <dt>Swish Activation function</dt><dd><p><a class="reference external" href="https://arxiv.org/pdf/1710.05941.pdf">https://arxiv.org/pdf/1710.05941.pdf</a></p> |
| </dd> |
| </dl> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.Swish.hybrid_forward" title="mxnet.gluon.nn.Swish.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x)</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>beta</strong> (<em>float</em>) – Coefficient in swish(x) = x * sigmoid(beta*x).</p> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with arbitrary shape.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: output tensor with the same shape as <cite>data</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
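The formula swish(x) = x * sigmoid(beta * x) can be sketched directly in NumPy (illustrative only, not the Gluon implementation):

```python
import numpy as np

def swish(x, beta=1.0):
    # swish(x) = x * sigmoid(beta * x)
    return x * (1.0 / (1.0 + np.exp(-beta * x)))

print(swish(np.array([-1.0, 0.0, 1.0])))
```

Note that swish(0) = 0 and, for large positive x, sigmoid(beta * x) approaches 1, so the function approaches the identity.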
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.Swish.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/nn/activations.html#Swish.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.Swish.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.nn.SymbolBlock"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.nn.</code><code class="sig-name descname">SymbolBlock</code><span class="sig-paren">(</span><em class="sig-param">outputs</em>, <em class="sig-param">inputs</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#SymbolBlock"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.SymbolBlock" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Construct block from symbol. This is useful for using pre-trained models |
| as feature extractors. For example, you may want to extract the output |
| from fc2 layer in AlexNet.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>outputs</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><em>list of Symbol</em>) – The desired output for SymbolBlock.</p></li> |
| <li><p><strong>inputs</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><em>list of Symbol</em>) – The Variables in output’s argument that should be used as inputs.</p></li> |
| <li><p><strong>params</strong> (<a class="reference internal" href="../parameter_dict.html#mxnet.gluon.ParameterDict" title="mxnet.gluon.ParameterDict"><em>ParameterDict</em></a>) – Parameter dictionary for arguments and auxiliary states of outputs
| that are not inputs.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.SymbolBlock.cast" title="mxnet.gluon.nn.SymbolBlock.cast"><code class="xref py py-obj docutils literal notranslate"><span class="pre">cast</span></code></a>(dtype)</p></td> |
| <td><p>Cast this Block to use another data type.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.SymbolBlock.forward" title="mxnet.gluon.nn.SymbolBlock.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(x, *args)</p></td> |
| <td><p>Defines the forward computation.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.SymbolBlock.hybrid_forward" title="mxnet.gluon.nn.SymbolBlock.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x, *args, **kwargs)</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.nn.SymbolBlock.imports" title="mxnet.gluon.nn.SymbolBlock.imports"><code class="xref py py-obj docutils literal notranslate"><span class="pre">imports</span></code></a>(symbol_file, input_names[, …])</p></td> |
| <td><p>Import model previously saved by <cite>gluon.HybridBlock.export</cite> or <cite>Module.save_checkpoint</cite> as a <cite>gluon.SymbolBlock</cite> for use in Gluon.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.nn.SymbolBlock.reset_ctx" title="mxnet.gluon.nn.SymbolBlock.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td> |
| <td><p>Re-assign all Parameters to other contexts.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <p class="rubric">Examples</p> |
| <div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># To extract the feature from fc1 and fc2 layers of AlexNet:</span> |
| <span class="gp">>>> </span><span class="n">alexnet</span> <span class="o">=</span> <span class="n">gluon</span><span class="o">.</span><span class="n">model_zoo</span><span class="o">.</span><span class="n">vision</span><span class="o">.</span><span class="n">alexnet</span><span class="p">(</span><span class="n">pretrained</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(),</span> |
| <span class="go"> prefix='model_')</span> |
| <span class="gp">>>> </span><span class="n">inputs</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">var</span><span class="p">(</span><span class="s1">'data'</span><span class="p">)</span> |
| <span class="gp">>>> </span><span class="n">out</span> <span class="o">=</span> <span class="n">alexnet</span><span class="p">(</span><span class="n">inputs</span><span class="p">)</span> |
| <span class="gp">>>> </span><span class="n">internals</span> <span class="o">=</span> <span class="n">out</span><span class="o">.</span><span class="n">get_internals</span><span class="p">()</span> |
| <span class="gp">>>> </span><span class="nb">print</span><span class="p">(</span><span class="n">internals</span><span class="o">.</span><span class="n">list_outputs</span><span class="p">())</span> |
| <span class="go">['data', ..., 'model_dense0_relu_fwd_output', ..., 'model_dense1_relu_fwd_output', ...]</span> |
| <span class="gp">>>> </span><span class="n">outputs</span> <span class="o">=</span> <span class="p">[</span><span class="n">internals</span><span class="p">[</span><span class="s1">'model_dense0_relu_fwd_output'</span><span class="p">],</span> |
| <span class="go"> internals['model_dense1_relu_fwd_output']]</span> |
| <span class="gp">>>> </span><span class="c1"># Create SymbolBlock that shares parameters with alexnet</span> |
| <span class="gp">>>> </span><span class="n">feat_model</span> <span class="o">=</span> <span class="n">gluon</span><span class="o">.</span><span class="n">SymbolBlock</span><span class="p">(</span><span class="n">outputs</span><span class="p">,</span> <span class="n">inputs</span><span class="p">,</span> <span class="n">params</span><span class="o">=</span><span class="n">alexnet</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span> |
| <span class="gp">>>> </span><span class="n">x</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">normal</span><span class="p">(</span><span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">16</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">224</span><span class="p">,</span> <span class="mi">224</span><span class="p">))</span> |
| <span class="gp">>>> </span><span class="nb">print</span><span class="p">(</span><span class="n">feat_model</span><span class="p">(</span><span class="n">x</span><span class="p">))</span> |
| </pre></div> |
| </div> |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.SymbolBlock.cast"> |
| <code class="sig-name descname">cast</code><span class="sig-paren">(</span><em class="sig-param">dtype</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#SymbolBlock.cast"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.SymbolBlock.cast" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Cast this Block to use another data type.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>dtype</strong> (<em>str</em><em> or </em><em>numpy.dtype</em>) – The new data type.</p> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.SymbolBlock.forward"> |
| <code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#SymbolBlock.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.SymbolBlock.forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Defines the forward computation. Arguments can be either |
| <code class="xref py py-class docutils literal notranslate"><span class="pre">NDArray</span></code> or <code class="xref py py-class docutils literal notranslate"><span class="pre">Symbol</span></code>.</p> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.SymbolBlock.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#SymbolBlock.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.SymbolBlock.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.SymbolBlock.imports"> |
| <em class="property">static </em><code class="sig-name descname">imports</code><span class="sig-paren">(</span><em class="sig-param">symbol_file</em>, <em class="sig-param">input_names</em>, <em class="sig-param">param_file=None</em>, <em class="sig-param">ctx=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#SymbolBlock.imports"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.SymbolBlock.imports" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Import model previously saved by <cite>gluon.HybridBlock.export</cite> or |
| <cite>Module.save_checkpoint</cite> as a <cite>gluon.SymbolBlock</cite> for use in Gluon.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>symbol_file</strong> (<em>str</em>) – Path to symbol file.</p></li> |
| <li><p><strong>input_names</strong> (<em>list of str</em>) – List of input variable names</p></li> |
| <li><p><strong>param_file</strong> (<em>str</em><em>, </em><em>optional</em>) – Path to parameter file.</p></li> |
| <li><p><strong>ctx</strong> (<a class="reference internal" href="../../mxnet/context/index.html#mxnet.context.Context" title="mxnet.context.Context"><em>Context</em></a><em>, </em><em>default None</em>) – The context to initialize <cite>gluon.SymbolBlock</cite> on.</p></li> |
| <li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li> |
| <li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not |
| present in this Block.</p></li> |
| </ul> |
| </dd> |
| <dt class="field-even">Returns</dt> |
| <dd class="field-even"><p><cite>gluon.SymbolBlock</cite> loaded from symbol and parameter files.</p> |
| </dd> |
| <dt class="field-odd">Return type</dt> |
| <dd class="field-odd"><p><a class="reference internal" href="../symbol_block.html#mxnet.gluon.SymbolBlock" title="mxnet.gluon.SymbolBlock">gluon.SymbolBlock</a></p> |
| </dd> |
| </dl> |
| <p class="rubric">Examples</p> |
| <div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">net1</span> <span class="o">=</span> <span class="n">gluon</span><span class="o">.</span><span class="n">model_zoo</span><span class="o">.</span><span class="n">vision</span><span class="o">.</span><span class="n">resnet18_v1</span><span class="p">(</span> |
| <span class="gp">... </span> <span class="n">prefix</span><span class="o">=</span><span class="s1">'resnet'</span><span class="p">,</span> <span class="n">pretrained</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span> |
| <span class="gp">>>> </span><span class="n">net1</span><span class="o">.</span><span class="n">hybridize</span><span class="p">()</span> |
| <span class="gp">>>> </span><span class="n">x</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">normal</span><span class="p">(</span><span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">))</span> |
| <span class="gp">>>> </span><span class="n">out1</span> <span class="o">=</span> <span class="n">net1</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> |
| <span class="gp">>>> </span><span class="n">net1</span><span class="o">.</span><span class="n">export</span><span class="p">(</span><span class="s1">'net1'</span><span class="p">,</span> <span class="n">epoch</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span> |
| <span class="go">>>></span> |
| <span class="gp">>>> </span><span class="n">net2</span> <span class="o">=</span> <span class="n">gluon</span><span class="o">.</span><span class="n">SymbolBlock</span><span class="o">.</span><span class="n">imports</span><span class="p">(</span> |
| <span class="gp">... </span> <span class="s1">'net1-symbol.json'</span><span class="p">,</span> <span class="p">[</span><span class="s1">'data'</span><span class="p">],</span> <span class="s1">'net1-0001.params'</span><span class="p">)</span> |
| <span class="gp">>>> </span><span class="n">out2</span> <span class="o">=</span> <span class="n">net2</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.nn.SymbolBlock.reset_ctx"> |
| <code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/block.html#SymbolBlock.reset_ctx"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.nn.SymbolBlock.reset_ctx" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Re-assign all Parameters to other contexts. If the Block is hybridized, it will reset the _cached_op_args.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>ctx</strong> (<a class="reference internal" href="../../mxnet/context/index.html#mxnet.context.Context" title="mxnet.context.Context"><em>Context</em></a><em> or </em><em>list of Context</em>) – Assign Parameter to given context. If ctx is a list of Context, a |
| copy will be made for each context.</p> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| </div> |
| </div> |
| |
| |
| <hr class="feedback-hr-top" /> |
| <div class="feedback-container"> |
| <div class="feedback-question">Did this page help you?</div> |
| <div class="feedback-answer-container"> |
| <div class="feedback-answer yes-link" data-response="yes">Yes</div> |
| <div class="feedback-answer no-link" data-response="no">No</div> |
| </div> |
| <div class="feedback-thank-you">Thanks for your feedback!</div> |
| </div> |
| <hr class="feedback-hr-bottom" /> |
| </div> |
| <div class="side-doc-outline"> |
| <div class="side-doc-outline--content"> |
| <div class="localtoc"> |
| <p class="caption"> |
| <span class="caption-text">Table Of Contents</span> |
| </p> |
| <ul> |
| <li><a class="reference internal" href="#">gluon.nn</a><ul> |
| <li><a class="reference internal" href="#sequential-containers">Sequential containers</a></li> |
| <li><a class="reference internal" href="#basic-layers">Basic Layers</a></li> |
| <li><a class="reference internal" href="#convolutional-layers">Convolutional Layers</a></li> |
| <li><a class="reference internal" href="#pooling-layers">Pooling Layers</a></li> |
| <li><a class="reference internal" href="#normalization-layers">Normalization Layers</a></li> |
| <li><a class="reference internal" href="#embedding-layers">Embedding Layers</a></li> |
| <li><a class="reference internal" href="#advanced-activation-layers">Advanced Activation Layers</a></li> |
| <li><a class="reference internal" href="#module-mxnet.gluon.nn">API Reference</a></li> |
| </ul> |
| </li> |
| </ul> |
| |
| </div> |
| </div> |
| </div> |
| |
| <div class="clearer"></div> |
| </div><div class="pagenation"> |
| <a id="button-prev" href="../model_zoo/index.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="button" accesskey="P"> |
| <i class="pagenation-arrow-L fas fa-arrow-left fa-lg"></i> |
| <div class="pagenation-text"> |
| <span class="pagenation-direction">Previous</span> |
| <div>gluon.model_zoo.vision</div> |
| </div> |
| </a> |
| <a id="button-next" href="../rnn/index.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="button" accesskey="N"> |
| <i class="pagenation-arrow-R fas fa-arrow-right fa-lg"></i> |
| <div class="pagenation-text"> |
| <span class="pagenation-direction">Next</span> |
| <div>gluon.rnn</div> |
| </div> |
| </a> |
| </div> |
| <footer class="site-footer h-card"> |
| <div class="wrapper"> |
| <div class="row"> |
| <div class="col-4"> |
| <h4 class="footer-category-title">Resources</h4> |
| <ul class="contact-list"> |
| <li><a class="u-email" href="mailto:dev@mxnet.apache.org">Dev list</a></li> |
| <li><a class="u-email" href="mailto:user@mxnet.apache.org">User mailing list</a></li> |
| <li><a href="https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+Home">Developer Wiki</a></li> |
| <li><a href="https://issues.apache.org/jira/projects/MXNET/issues">Jira Tracker</a></li> |
| <li><a href="https://github.com/apache/incubator-mxnet/labels/Roadmap">Github Roadmap</a></li> |
| <li><a href="https://discuss.mxnet.io">MXNet Discuss forum</a></li> |
| <li><a href="/community/contribute">Contribute To MXNet</a></li> |
| |
| </ul> |
| </div> |
| |
| <div class="col-4"><ul class="social-media-list"><li><a href="https://github.com/apache/incubator-mxnet"><svg class="svg-icon"><use xlink:href="../../../_static/minima-social-icons.svg#github"></use></svg> <span class="username">apache/incubator-mxnet</span></a></li><li><a href="https://www.twitter.com/apachemxnet"><svg class="svg-icon"><use xlink:href="../../../_static/minima-social-icons.svg#twitter"></use></svg> <span class="username">apachemxnet</span></a></li><li><a href="https://youtube.com/apachemxnet"><svg class="svg-icon"><use xlink:href="../../../_static/minima-social-icons.svg#youtube"></use></svg> <span class="username">apachemxnet</span></a></li></ul> |
| </div> |
| |
| <div class="col-4 footer-text"> |
| <p>A flexible and efficient library for deep learning.</p> |
| </div> |
| </div> |
| </div> |
| </footer> |
| |
| <footer class="site-footer2"> |
| <div class="wrapper"> |
| <div class="row"> |
| <div class="col-3"> |
| <img src="../../../_static/apache_incubator_logo.png" class="footer-logo col-2"> |
| </div> |
| <div class="footer-bottom-warning col-9"> |
| <p>Apache MXNet is an effort undergoing incubation at The Apache Software Foundation (ASF), <span style="font-weight:bold">sponsored by the <i>Apache Incubator</i></span>. Incubation is required |
| of all newly accepted projects until a further review indicates that the infrastructure, |
| communications, and decision making process have stabilized in a manner consistent with other |
| successful ASF projects. While incubation status is not necessarily a reflection of the completeness |
| or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. |
| </p><p>"Copyright © 2017-2018, The Apache Software Foundation Apache MXNet, MXNet, Apache, the Apache |
| feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the |
| Apache Software Foundation."</p> |
| </div> |
| </div> |
| </div> |
| </footer> |
| |
| </body> |
| </html> |