<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<style>
.dropdown {
    position: relative;
    display: inline-block;
}

.dropdown-content {
    display: none;
    position: absolute;
    background-color: #f9f9f9;
    min-width: 160px;
    box-shadow: 0px 8px 16px 0px rgba(0,0,0,0.2);
    padding: 12px 16px;
    z-index: 1;
    text-align: left;
}

.dropdown:hover .dropdown-content {
    display: block;
}

.dropdown-option:hover {
    color: #FF4500;
}

.dropdown-option-active {
    color: #FF4500;
    font-weight: lighter;
}

.dropdown-option {
    color: #000000;
    font-weight: lighter;
}

.dropdown-header {
    color: #FFFFFF;
    display: inline-flex;
}

.dropdown-caret {
    width: 18px;
}

.dropdown-caret-path {
    fill: #FFFFFF;
}
</style>

<title>gluon.contrib — Apache MXNet documentation</title>

<link rel="stylesheet" href="../../../_static/basic.css" type="text/css" />
<link rel="stylesheet" href="../../../_static/pygments.css" type="text/css" />
<link rel="stylesheet" type="text/css" href="../../../_static/mxnet.css" />
<link rel="stylesheet" href="../../../_static/material-design-lite-1.3.0/material.blue-deep_orange.min.css" type="text/css" />
<link rel="stylesheet" href="../../../_static/sphinx_materialdesign_theme.css" type="text/css" />
<link rel="stylesheet" href="../../../_static/fontawesome/all.css" type="text/css" />
<link rel="stylesheet" href="../../../_static/fonts.css" type="text/css" />
<link rel="stylesheet" href="../../../_static/feedback.css" type="text/css" />
<script id="documentation_options" data-url_root="../../../" src="../../../_static/documentation_options.js"></script>
<script src="../../../_static/jquery.js"></script>
<script src="../../../_static/underscore.js"></script>
<script src="../../../_static/doctools.js"></script>
<script src="../../../_static/language_data.js"></script>
<script src="../../../_static/google_analytics.js"></script>
<script src="../../../_static/autodoc.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/latest.js?config=TeX-AMS-MML_HTMLorMML"></script>
<script type="text/x-mathjax-config">MathJax.Hub.Config({"tex2jax": {"inlineMath": [["$", "$"], ["\\(", "\\)"]], "processEscapes": true, "ignoreClass": "document", "processClass": "math|output_area"}})</script>
<link rel="shortcut icon" href="../../../_static/mxnet-icon.png"/>
<link rel="index" title="Index" href="../../../genindex.html" />
<link rel="search" title="Search" href="../../../search.html" />
<link rel="next" title="gluon.data" href="../data/index.html" />
<link rel="prev" title="gluon.Trainer" href="../trainer.html" />
</head>
<body><header class="site-header" role="banner">
<div class="wrapper">
<a class="site-title" rel="author" href="/versions/1.8.0/"><img
    src="../../../_static/mxnet_logo.png" class="site-header-logo" alt="Apache MXNet"></a>
<nav class="site-nav">
<input type="checkbox" id="nav-trigger" class="nav-trigger"/>
<label for="nav-trigger">
<span class="menu-icon">
<svg viewBox="0 0 18 15" width="18px" height="15px">
<path d="M18,1.484c0,0.82-0.665,1.484-1.484,1.484H1.484C0.665,2.969,0,2.304,0,1.484l0,0C0,0.665,0.665,0,1.484,0 h15.032C17.335,0,18,0.665,18,1.484L18,1.484z M18,7.516C18,8.335,17.335,9,16.516,9H1.484C0.665,9,0,8.335,0,7.516l0,0 c0-0.82,0.665-1.484,1.484-1.484h15.032C17.335,6.031,18,6.696,18,7.516L18,7.516z M18,13.516C18,14.335,17.335,15,16.516,15H1.484 C0.665,15,0,14.335,0,13.516l0,0c0-0.82,0.665-1.483,1.484-1.483h15.032C17.335,12.031,18,12.695,18,13.516L18,13.516z"/>
</svg>
</span>
</label>

<div class="trigger">
<a class="page-link" href="/versions/1.8.0/get_started">Get Started</a>
<a class="page-link" href="/versions/1.8.0/blog">Blog</a>
<a class="page-link" href="/versions/1.8.0/features">Features</a>
<a class="page-link" href="/versions/1.8.0/ecosystem">Ecosystem</a>
<a class="page-link page-current" href="/versions/1.8.0/api">Docs & Tutorials</a>
<a class="page-link" href="https://github.com/apache/incubator-mxnet">GitHub</a>
<div class="dropdown">
<span class="dropdown-header">1.8.0
<svg class="dropdown-caret" viewBox="0 0 32 32" aria-hidden="true"><path class="dropdown-caret-path" d="M24 11.305l-7.997 11.39L8 11.305z"></path></svg>
</span>
<div class="dropdown-content">
<a class="dropdown-option" href="/">master</a><br>
<a class="dropdown-option-active" href="/versions/1.8.0/">1.8.0</a><br>
<a class="dropdown-option" href="/versions/1.7.0/">1.7.0</a><br>
<a class="dropdown-option" href="/versions/1.6.0/">1.6.0</a><br>
<a class="dropdown-option" href="/versions/1.5.0/">1.5.0</a><br>
<a class="dropdown-option" href="/versions/1.4.1/">1.4.1</a><br>
<a class="dropdown-option" href="/versions/1.3.1/">1.3.1</a><br>
<a class="dropdown-option" href="/versions/1.2.1/">1.2.1</a><br>
<a class="dropdown-option" href="/versions/1.1.0/">1.1.0</a><br>
<a class="dropdown-option" href="/versions/1.0.0/">1.0.0</a><br>
<a class="dropdown-option" href="/versions/0.12.1/">0.12.1</a><br>
<a class="dropdown-option" href="/versions/0.11.0/">0.11.0</a>
</div>
</div>
</div>
</nav>
</div>
</header>
<div class="mdl-layout mdl-js-layout mdl-layout--fixed-header mdl-layout--fixed-drawer"><header class="mdl-layout__header mdl-layout__header--waterfall ">
<div class="mdl-layout__header-row">

<nav class="mdl-navigation breadcrumb">
<a class="mdl-navigation__link" href="../../index.html">Python API</a><i class="material-icons">navigate_next</i>
<a class="mdl-navigation__link" href="../index.html">mxnet.gluon</a><i class="material-icons">navigate_next</i>
<a class="mdl-navigation__link is-active">gluon.contrib</a>
</nav>
<div class="mdl-layout-spacer"></div>
<nav class="mdl-navigation">

<form class="form-inline pull-sm-right" action="../../../search.html" method="get">
<div class="mdl-textfield mdl-js-textfield mdl-textfield--expandable mdl-textfield--floating-label mdl-textfield--align-right">
<label id="quick-search-icon" class="mdl-button mdl-js-button mdl-button--icon" for="waterfall-exp">
<i class="material-icons">search</i>
</label>
<div class="mdl-textfield__expandable-holder">
<input class="mdl-textfield__input" type="text" name="q" id="waterfall-exp" placeholder="Search" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</div>
</div>
<div class="mdl-tooltip" data-mdl-for="quick-search-icon">
Quick search
</div>
</form>

<a id="button-show-source"
   class="mdl-button mdl-js-button mdl-button--icon"
   href="../../../_sources/api/gluon/contrib/index.rst" rel="nofollow">
<i class="material-icons">code</i>
</a>
<div class="mdl-tooltip" data-mdl-for="button-show-source">
Show Source
</div>
</nav>
</div>
<div class="mdl-layout__header-row header-links">
<div class="mdl-layout-spacer"></div>
<nav class="mdl-navigation">
</nav>
</div>
</header><header class="mdl-layout__drawer">

<div class="globaltoc">
<span class="mdl-layout-title toc">Table Of Contents</span>



<nav class="mdl-navigation">
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="../../../tutorials/index.html">Python Tutorials</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/getting-started/index.html">Getting Started</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/index.html">Crash Course</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/1-ndarray.html">Manipulate data with <code class="docutils literal notranslate"><span class="pre">ndarray</span></code></a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/2-nn.html">Create a neural network</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/3-autograd.html">Automatic differentiation with <code class="docutils literal notranslate"><span class="pre">autograd</span></code></a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/4-train.html">Train the neural network</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/5-predict.html">Predict with a pre-trained model</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/6-use_gpus.html">Use GPUs</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/index.html">Moving to MXNet from Other Frameworks</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/pytorch.html">PyTorch vs Apache MXNet</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/gluon_from_experiment_to_deployment.html">Gluon: from experiment to deployment</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/logistic_regression_explained.html">Logistic regression explained</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/image/mnist.html">MNIST</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/packages/index.html">Packages</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/autograd/index.html">Automatic Differentiation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/gluon/index.html">Gluon</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/index.html">Blocks</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/custom-layer.html">Custom Layers</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/custom_layer_beginners.html">Custom Layers (Beginners)</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/hybridize.html">Hybridize</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/init.html">Initialization</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/naming.html">Parameter and Block Naming</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/nn.html">Layers and Blocks</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/parameters.html">Parameter Management</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/activations/activations.html">Activation Blocks</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/data/index.html">Data Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html">Image Augmentation</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html#Spatial-Augmentation">Spatial Augmentation</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html#Color-Augmentation">Color Augmentation</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html#Composed-Augmentations">Composed Augmentations</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html">Gluon <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s and <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-included-Datasets">Using own data with included <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-custom-Datasets">Using own data with custom <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Appendix:-Upgrading-from-Module-DataIter-to-Gluon-DataLoader">Appendix: Upgrading from Module <code class="docutils literal notranslate"><span class="pre">DataIter</span></code> to Gluon <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/image/index.html">Image Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/image-augmentation.html">Image Augmentation</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/info_gan.html">Image similarity search with InfoGAN</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/mnist.html">Handwritten Digit Recognition</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/pretrained_models.html">Using pre-trained models in MXNet</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/index.html">Losses</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/custom-loss.html">Custom Loss Blocks</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/kl_divergence.html">Kullback-Leibler (KL) Divergence</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/loss.html">Loss functions</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/text/index.html">Text Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/gnmt.html">Google Neural Machine Translation</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/transformer.html">Machine Translation with Transformer</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/training/index.html">Training</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/fit_api_tutorial.html">MXNet Gluon Fit API</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/trainer.html">Trainer</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/index.html">Learning Rates</a><ul>
<li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_finder.html">Learning Rate Finder</a></li>
<li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules.html">Learning Rate Schedules</a></li>
<li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules_advanced.html">Advanced Learning Rate Schedules</a></li>
</ul>
</li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/normalization/index.html">Normalization Blocks</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/kvstore/index.html">KVStore</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/kvstore/kvstore.html">Distributed Key-Value Store</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/ndarray/index.html">NDArray</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/01-ndarray-intro.html">An Intro: Manipulate Data the MXNet Way with NDArray</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/02-ndarray-operations.html">NDArray Operations</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/03-ndarray-contexts.html">NDArray Contexts</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/gotchas_numpy_in_mxnet.html">Gotchas using NumPy in Apache MXNet</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/index.html">Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/csr.html">CSRNDArray - NDArray in Compressed Sparse Row Storage Format</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/row_sparse.html">RowSparseNDArray - NDArray for Sparse Gradient Updates</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/train.html">Train a Linear Regression Model with Sparse Symbols</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/train_gluon.html">Sparse NDArrays with Gluon</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/onnx/index.html">ONNX</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/fine_tuning_gluon.html">Fine-tuning an ONNX model</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/inference_on_onnx_model.html">Running inference on MXNet/Gluon from an ONNX model</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/super_resolution.html">Importing an ONNX model into MXNet</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/deploy/export/onnx.html">Export ONNX Models</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/optimizer/index.html">Optimizers</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/viz/index.html">Visualization</a><ul>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/visualize_graph">Visualize networks</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/performance/index.html">Performance</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/compression/index.html">Compression</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/compression/int8.html">Deploy with int-8</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/float16">Float16</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/gradient_compression">Gradient Compression</a></li>
<li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/int8_inference.html">GluonCV with Quantized Models</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/backend/index.html">Accelerated Backend Tools</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/index.html">Intel MKL-DNN</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/mkldnn_quantization.html">Quantize with MKL-DNN backend</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/mkldnn_readme.html">Install MXNet with MKL-DNN</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/tensorrt/index.html">TensorRT</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/tensorrt/tensorrt.html">Optimizing Deep Learning Computation Graphs with TensorRT</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/tvm.html">Use TVM</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/profiler.html">Profiling MXNet Models</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/amp.html">Using AMP: Automatic Mixed Precision</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/deploy/index.html">Deployment</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/export/index.html">Export</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/export/onnx.html">Exporting to ONNX format</a></li>
<li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/export_network.html">Export Gluon CV Models</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Save / Load Parameters</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/inference/index.html">Inference</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/cpp.html">Deploy into C++</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/image_classification_jetson.html">Image Classification using pretrained ResNet-50 model on Jetson module</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/scala.html">Deploy into a Java or Scala Environment</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/wine_detector.html">Real-time Object Detection with MXNet On The Raspberry Pi</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/index.html">Run on AWS</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_ec2.html">Run on an EC2 Instance</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_sagemaker.html">Run on Amazon SageMaker</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/cloud.html">MXNet on the Cloud</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/extend/index.html">Extend</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/extend/custom_layer.html">Custom Layers</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/extend/customop.html">Custom Numpy Operators</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/new_op">New Operator Creation</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/add_op_in_backend">New Operator in MXNet Backend</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 current"><a class="reference internal" href="../../index.html">Python API</a><ul class="current">
<li class="toctree-l2"><a class="reference internal" href="../../ndarray/index.html">mxnet.ndarray</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../ndarray/ndarray.html">ndarray</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../ndarray/contrib/index.html">ndarray.contrib</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../ndarray/image/index.html">ndarray.image</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../ndarray/linalg/index.html">ndarray.linalg</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../ndarray/op/index.html">ndarray.op</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../ndarray/random/index.html">ndarray.random</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../ndarray/register/index.html">ndarray.register</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../ndarray/sparse/index.html">ndarray.sparse</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../ndarray/utils/index.html">ndarray.utils</a></li>
</ul>
</li>
<li class="toctree-l2 current"><a class="reference internal" href="../index.html">mxnet.gluon</a><ul class="current">
<li class="toctree-l3"><a class="reference internal" href="../block.html">gluon.Block</a></li>
<li class="toctree-l3"><a class="reference internal" href="../hybrid_block.html">gluon.HybridBlock</a></li>
<li class="toctree-l3"><a class="reference internal" href="../symbol_block.html">gluon.SymbolBlock</a></li>
<li class="toctree-l3"><a class="reference internal" href="../constant.html">gluon.Constant</a></li>
<li class="toctree-l3"><a class="reference internal" href="../parameter.html">gluon.Parameter</a></li>
<li class="toctree-l3"><a class="reference internal" href="../parameter_dict.html">gluon.ParameterDict</a></li>
<li class="toctree-l3"><a class="reference internal" href="../trainer.html">gluon.Trainer</a></li>
<li class="toctree-l3 current"><a class="current reference internal" href="#">gluon.contrib</a></li>
<li class="toctree-l3"><a class="reference internal" href="../data/index.html">gluon.data</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../data/vision/index.html">data.vision</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../data/vision/datasets/index.html">vision.datasets</a></li>
<li class="toctree-l5"><a class="reference internal" href="../data/vision/transforms/index.html">vision.transforms</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../loss/index.html">gluon.loss</a></li>
<li class="toctree-l3"><a class="reference internal" href="../model_zoo/index.html">gluon.model_zoo.vision</a></li>
<li class="toctree-l3"><a class="reference internal" href="../nn/index.html">gluon.nn</a></li>
<li class="toctree-l3"><a class="reference internal" href="../rnn/index.html">gluon.rnn</a></li>
<li class="toctree-l3"><a class="reference internal" href="../utils/index.html">gluon.utils</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../autograd/index.html">mxnet.autograd</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../initializer/index.html">mxnet.initializer</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../optimizer/index.html">mxnet.optimizer</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../lr_scheduler/index.html">mxnet.lr_scheduler</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../metric/index.html">mxnet.metric</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../kvstore/index.html">mxnet.kvstore</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../symbol/index.html">mxnet.symbol</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../symbol/symbol.html">symbol</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../symbol/contrib/index.html">symbol.contrib</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../symbol/image/index.html">symbol.image</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../symbol/linalg/index.html">symbol.linalg</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../symbol/op/index.html">symbol.op</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../symbol/random/index.html">symbol.random</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../symbol/register/index.html">symbol.register</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../symbol/sparse/index.html">symbol.sparse</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../module/index.html">mxnet.module</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../contrib/index.html">mxnet.contrib</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/autograd/index.html">contrib.autograd</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/io/index.html">contrib.io</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/ndarray/index.html">contrib.ndarray</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/onnx/index.html">contrib.onnx</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/quantization/index.html">contrib.quantization</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/symbol/index.html">contrib.symbol</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorboard/index.html">contrib.tensorboard</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorrt/index.html">contrib.tensorrt</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/text/index.html">contrib.text</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../mxnet/index.html">mxnet</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../mxnet/attribute/index.html">mxnet.attribute</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../mxnet/base/index.html">mxnet.base</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../mxnet/callback/index.html">mxnet.callback</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../mxnet/context/index.html">mxnet.context</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../mxnet/engine/index.html">mxnet.engine</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../mxnet/executor/index.html">mxnet.executor</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../mxnet/executor_manager/index.html">mxnet.executor_manager</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../mxnet/image/index.html">mxnet.image</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../mxnet/io/index.html">mxnet.io</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../mxnet/kvstore_server/index.html">mxnet.kvstore_server</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../mxnet/libinfo/index.html">mxnet.libinfo</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../mxnet/log/index.html">mxnet.log</a></li>
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/model/index.html">mxnet.model</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/monitor/index.html">mxnet.monitor</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/name/index.html">mxnet.name</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/notebook/index.html">mxnet.notebook</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/operator/index.html">mxnet.operator</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/profiler/index.html">mxnet.profiler</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/random/index.html">mxnet.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/recordio/index.html">mxnet.recordio</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/registry/index.html">mxnet.registry</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/rtc/index.html">mxnet.rtc</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/runtime/index.html">mxnet.runtime</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/test_utils/index.html">mxnet.test_utils</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/torch/index.html">mxnet.torch</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/util/index.html">mxnet.util</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/visualization/index.html">mxnet.visualization</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| </ul> |
| |
| </nav> |
| |
| </div> |
| |
| </header> |
| <main class="mdl-layout__content" tabIndex="0"> |
| |
| <script type="text/javascript" src="../../../_static/sphinx_materialdesign_theme.js"></script> |
| <script type="text/javascript" src="../../../_static/feedback.js"></script> |
| <header class="mdl-layout__drawer"> |
| |
| <div class="globaltoc"> |
| <span class="mdl-layout-title toc">Table Of Contents</span> |
| |
| |
| |
| <nav class="mdl-navigation"> |
| <ul class="current"> |
| <li class="toctree-l1"><a class="reference internal" href="../../../tutorials/index.html">Python Tutorials</a><ul> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/getting-started/index.html">Getting Started</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/index.html">Crash Course</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/1-ndarray.html">Manipulate data with <code class="docutils literal notranslate"><span class="pre">ndarray</span></code></a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/2-nn.html">Create a neural network</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/3-autograd.html">Automatic differentiation with <code class="docutils literal notranslate"><span class="pre">autograd</span></code></a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/4-train.html">Train the neural network</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/5-predict.html">Predict with a pre-trained model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/6-use_gpus.html">Use GPUs</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/index.html">Moving to MXNet from Other Frameworks</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/pytorch.html">PyTorch vs Apache MXNet</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/gluon_from_experiment_to_deployment.html">Gluon: from experiment to deployment</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/logistic_regression_explained.html">Logistic regression explained</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/image/mnist.html">MNIST</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/packages/index.html">Packages</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/autograd/index.html">Automatic Differentiation</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/gluon/index.html">Gluon</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/index.html">Blocks</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/custom-layer.html">Custom Layers</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/custom_layer_beginners.html">Custom Layers (Beginners)</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/hybridize.html">Hybridize</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/init.html">Initialization</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/naming.html">Parameter and Block Naming</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/nn.html">Layers and Blocks</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/parameters.html">Parameter Management</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/activations/activations.html">Activation Blocks</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/data/index.html">Data Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html">Image Augmentation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html#Spatial-Augmentation">Spatial Augmentation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html#Color-Augmentation">Color Augmentation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html#Composed-Augmentations">Composed Augmentations</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html">Gluon <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s and <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-included-Datasets">Using your own data with included <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-custom-Datasets">Using your own data with custom <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Appendix:-Upgrading-from-Module-DataIter-to-Gluon-DataLoader">Appendix: Upgrading from Module <code class="docutils literal notranslate"><span class="pre">DataIter</span></code> to Gluon <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/image/index.html">Image Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/image-augmentation.html">Image Augmentation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/info_gan.html">Image similarity search with InfoGAN</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/mnist.html">Handwritten Digit Recognition</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/pretrained_models.html">Using pre-trained models in MXNet</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/index.html">Losses</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/custom-loss.html">Custom Loss Blocks</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/kl_divergence.html">Kullback-Leibler (KL) Divergence</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/loss.html">Loss functions</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/text/index.html">Text Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/gnmt.html">Google Neural Machine Translation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/transformer.html">Machine Translation with Transformer</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/training/index.html">Training</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/fit_api_tutorial.html">MXNet Gluon Fit API</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/trainer.html">Trainer</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/index.html">Learning Rates</a><ul> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_finder.html">Learning Rate Finder</a></li> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules.html">Learning Rate Schedules</a></li> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules_advanced.html">Advanced Learning Rate Schedules</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/normalization/index.html">Normalization Blocks</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/kvstore/index.html">KVStore</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/kvstore/kvstore.html">Distributed Key-Value Store</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/ndarray/index.html">NDArray</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/01-ndarray-intro.html">An Intro: Manipulate Data the MXNet Way with NDArray</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/02-ndarray-operations.html">NDArray Operations</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/03-ndarray-contexts.html">NDArray Contexts</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/gotchas_numpy_in_mxnet.html">Gotchas using NumPy in Apache MXNet</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/index.html">Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/csr.html">CSRNDArray - NDArray in Compressed Sparse Row Storage Format</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/row_sparse.html">RowSparseNDArray - NDArray for Sparse Gradient Updates</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/train.html">Train a Linear Regression Model with Sparse Symbols</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/train_gluon.html">Sparse NDArrays with Gluon</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/onnx/index.html">ONNX</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/fine_tuning_gluon.html">Fine-tuning an ONNX model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/inference_on_onnx_model.html">Running inference on MXNet/Gluon from an ONNX model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/super_resolution.html">Importing an ONNX model into MXNet</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/deploy/export/onnx.html">Export ONNX Models</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/optimizer/index.html">Optimizers</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/viz/index.html">Visualization</a><ul> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/visualize_graph">Visualize networks</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/performance/index.html">Performance</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/compression/index.html">Compression</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/compression/int8.html">Deploy with INT8</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/float16">Float16</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/gradient_compression">Gradient Compression</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/int8_inference.html">GluonCV with Quantized Models</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/backend/index.html">Accelerated Backend Tools</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/index.html">Intel MKL-DNN</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/mkldnn_quantization.html">Quantize with MKL-DNN backend</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/mkldnn_readme.html">Install MXNet with MKL-DNN</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/tensorrt/index.html">TensorRT</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/tensorrt/tensorrt.html">Optimizing Deep Learning Computation Graphs with TensorRT</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/tvm.html">Use TVM</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/profiler.html">Profiling MXNet Models</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/amp.html">Using AMP: Automatic Mixed Precision</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/deploy/index.html">Deployment</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/export/index.html">Export</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/export/onnx.html">Exporting to ONNX format</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/export_network.html">Export Gluon CV Models</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Save / Load Parameters</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/inference/index.html">Inference</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/cpp.html">Deploy into C++</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/image_classification_jetson.html">Image Classification using pretrained ResNet-50 model on Jetson module</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/scala.html">Deploy into a Java or Scala Environment</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/wine_detector.html">Real-time Object Detection with MXNet On The Raspberry Pi</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/index.html">Run on AWS</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_ec2.html">Run on an EC2 Instance</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_sagemaker.html">Run on Amazon SageMaker</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/cloud.html">MXNet on the Cloud</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/extend/index.html">Extend</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/extend/custom_layer.html">Custom Layers</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/extend/customop.html">Custom Numpy Operators</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/new_op">New Operator Creation</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/add_op_in_backend">New Operator in MXNet Backend</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l1 current"><a class="reference internal" href="../../index.html">Python API</a><ul class="current"> |
| <li class="toctree-l2"><a class="reference internal" href="../../ndarray/index.html">mxnet.ndarray</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/ndarray.html">ndarray</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/contrib/index.html">ndarray.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/image/index.html">ndarray.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/linalg/index.html">ndarray.linalg</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/op/index.html">ndarray.op</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/random/index.html">ndarray.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/register/index.html">ndarray.register</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/sparse/index.html">ndarray.sparse</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/utils/index.html">ndarray.utils</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2 current"><a class="reference internal" href="../index.html">mxnet.gluon</a><ul class="current"> |
| <li class="toctree-l3"><a class="reference internal" href="../block.html">gluon.Block</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../hybrid_block.html">gluon.HybridBlock</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../symbol_block.html">gluon.SymbolBlock</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../constant.html">gluon.Constant</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../parameter.html">gluon.Parameter</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../parameter_dict.html">gluon.ParameterDict</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../trainer.html">gluon.Trainer</a></li> |
| <li class="toctree-l3 current"><a class="current reference internal" href="#">gluon.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../data/index.html">gluon.data</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../data/vision/index.html">data.vision</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../data/vision/datasets/index.html">vision.datasets</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../data/vision/transforms/index.html">vision.transforms</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../loss/index.html">gluon.loss</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../model_zoo/index.html">gluon.model_zoo.vision</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../nn/index.html">gluon.nn</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../rnn/index.html">gluon.rnn</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../utils/index.html">gluon.utils</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../autograd/index.html">mxnet.autograd</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../initializer/index.html">mxnet.initializer</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../optimizer/index.html">mxnet.optimizer</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../lr_scheduler/index.html">mxnet.lr_scheduler</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../metric/index.html">mxnet.metric</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../kvstore/index.html">mxnet.kvstore</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../symbol/index.html">mxnet.symbol</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/symbol.html">symbol</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/contrib/index.html">symbol.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/image/index.html">symbol.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/linalg/index.html">symbol.linalg</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/op/index.html">symbol.op</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/random/index.html">symbol.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/register/index.html">symbol.register</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/sparse/index.html">symbol.sparse</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../module/index.html">mxnet.module</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../contrib/index.html">mxnet.contrib</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/autograd/index.html">contrib.autograd</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/io/index.html">contrib.io</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/ndarray/index.html">contrib.ndarray</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/onnx/index.html">contrib.onnx</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/quantization/index.html">contrib.quantization</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/symbol/index.html">contrib.symbol</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorboard/index.html">contrib.tensorboard</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorrt/index.html">contrib.tensorrt</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/text/index.html">contrib.text</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../mxnet/index.html">mxnet</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/attribute/index.html">mxnet.attribute</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/base/index.html">mxnet.base</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/callback/index.html">mxnet.callback</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/context/index.html">mxnet.context</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/engine/index.html">mxnet.engine</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/executor/index.html">mxnet.executor</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/executor_manager/index.html">mxnet.executor_manager</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/image/index.html">mxnet.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/io/index.html">mxnet.io</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/kvstore_server/index.html">mxnet.kvstore_server</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/libinfo/index.html">mxnet.libinfo</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/log/index.html">mxnet.log</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/model/index.html">mxnet.model</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/monitor/index.html">mxnet.monitor</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/name/index.html">mxnet.name</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/notebook/index.html">mxnet.notebook</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/operator/index.html">mxnet.operator</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/profiler/index.html">mxnet.profiler</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/random/index.html">mxnet.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/recordio/index.html">mxnet.recordio</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/registry/index.html">mxnet.registry</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/rtc/index.html">mxnet.rtc</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/runtime/index.html">mxnet.runtime</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/test_utils/index.html">mxnet.test_utils</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/torch/index.html">mxnet.torch</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/util/index.html">mxnet.util</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/visualization/index.html">mxnet.visualization</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| </ul> |
| |
| </nav> |
| |
| </div> |
| |
| </header> |
| |
| <div class="document"> |
| <div class="page-content" role="main"> |
| |
| <div class="section" id="gluon-contrib"> |
| <h1>gluon.contrib<a class="headerlink" href="#gluon-contrib" title="Permalink to this headline">¶</a></h1> |
| <p>This document lists the contrib APIs in Gluon:</p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#module-mxnet.gluon.contrib" title="mxnet.gluon.contrib"><code class="xref py py-obj docutils literal notranslate"><span class="pre">mxnet.gluon.contrib</span></code></a></p></td> |
| <td><p>Contrib neural network module.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <p>The <cite>Gluon Contrib</cite> API, defined in the <cite>gluon.contrib</cite> package, provides |
| many useful experimental APIs for new features. |
| It is a place for the community to try out new features, |
| so that feature contributors can receive feedback.</p> |
| <div class="admonition warning"> |
| <p class="admonition-title">Warning</p> |
| <p>This package contains experimental APIs and may change in the near future.</p> |
| </div> |
| <p>In the rest of this document, we list routines provided by the <cite>gluon.contrib</cite> package.</p> |
| <div class="section" id="neural-network"> |
| <h2>Neural Network<a class="headerlink" href="#neural-network" title="Permalink to this headline">¶</a></h2> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.nn.Concurrent" title="mxnet.gluon.contrib.nn.Concurrent"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Concurrent</span></code></a></p></td> |
| <td><p>Lays <cite>Block</cite>s concurrently.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.nn.HybridConcurrent" title="mxnet.gluon.contrib.nn.HybridConcurrent"><code class="xref py py-obj docutils literal notranslate"><span class="pre">HybridConcurrent</span></code></a></p></td> |
| <td><p>Lays <cite>HybridBlock</cite>s concurrently.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.nn.Identity" title="mxnet.gluon.contrib.nn.Identity"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Identity</span></code></a></p></td> |
| <td><p>Block that passes through the input directly.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.nn.SparseEmbedding" title="mxnet.gluon.contrib.nn.SparseEmbedding"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SparseEmbedding</span></code></a></p></td> |
| <td><p>Turns non-negative integers (indexes/tokens) into dense vectors of fixed size.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.nn.SyncBatchNorm" title="mxnet.gluon.contrib.nn.SyncBatchNorm"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SyncBatchNorm</span></code></a></p></td> |
| <td><p>Cross-GPU Synchronized Batch normalization (SyncBN)</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.nn.PixelShuffle1D" title="mxnet.gluon.contrib.nn.PixelShuffle1D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">PixelShuffle1D</span></code></a></p></td> |
| <td><p>Pixel-shuffle layer for upsampling in 1 dimension.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.nn.PixelShuffle2D" title="mxnet.gluon.contrib.nn.PixelShuffle2D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">PixelShuffle2D</span></code></a></p></td> |
| <td><p>Pixel-shuffle layer for upsampling in 2 dimensions.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.nn.PixelShuffle3D" title="mxnet.gluon.contrib.nn.PixelShuffle3D"><code class="xref py py-obj docutils literal notranslate"><span class="pre">PixelShuffle3D</span></code></a></p></td> |
| <td><p>Pixel-shuffle layer for upsampling in 3 dimensions.</p></td> |
| </tr> |
| </tbody> |
| </table> |
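<p>As a sketch of what the pixel-shuffle layers above compute: <cite>PixelShuffle2D</cite> with upscale factor <cite>r</cite> rearranges a tensor of shape <cite>(C·r², H, W)</cite> into <cite>(C, H·r, W·r)</cite>, trading channels for spatial resolution. The NumPy function below is an illustrative reimplementation of that rearrangement, not the MXNet operator itself; the function name and the channel-first layout are assumptions for the sketch.</p>

```python
import numpy as np

def pixel_shuffle_2d(x, r):
    """Rearrange (C*r*r, H, W) -> (C, H*r, W*r), as in pixel-shuffle upsampling."""
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    # Split the channel axis into (C, r, r), then interleave
    # the two factors of r into the spatial axes.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)      # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16, dtype=float).reshape(4, 2, 2)   # C=1, r=2
y = pixel_shuffle_2d(x, 2)
print(y.shape)  # (1, 4, 4)
```

<p>Each 2×2 output patch draws one value from each of the four input channels, which is why the channel count must be divisible by <cite>r²</cite>.</p>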
| </div> |
| <div class="section" id="convolutional-neural-network"> |
| <h2>Convolutional Neural Network<a class="headerlink" href="#convolutional-neural-network" title="Permalink to this headline">¶</a></h2> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.cnn.DeformableConvolution" title="mxnet.gluon.contrib.cnn.DeformableConvolution"><code class="xref py py-obj docutils literal notranslate"><span class="pre">DeformableConvolution</span></code></a></p></td> |
| <td><p>2-D Deformable Convolution v1 (Dai et al., 2017).</p></td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <div class="section" id="recurrent-neural-network"> |
| <h2>Recurrent Neural Network<a class="headerlink" href="#recurrent-neural-network" title="Permalink to this headline">¶</a></h2> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.rnn.VariationalDropoutCell" title="mxnet.gluon.contrib.rnn.VariationalDropoutCell"><code class="xref py py-obj docutils literal notranslate"><span class="pre">VariationalDropoutCell</span></code></a></p></td> |
| <td><p>Applies Variational Dropout on the base cell.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv1DRNNCell" title="mxnet.gluon.contrib.rnn.Conv1DRNNCell"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv1DRNNCell</span></code></a></p></td> |
| <td><p>1D Convolutional RNN cell.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv2DRNNCell" title="mxnet.gluon.contrib.rnn.Conv2DRNNCell"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv2DRNNCell</span></code></a></p></td> |
| <td><p>2D Convolutional RNN cell.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv3DRNNCell" title="mxnet.gluon.contrib.rnn.Conv3DRNNCell"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv3DRNNCell</span></code></a></p></td> |
| <td><p>3D Convolutional RNN cell.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv1DLSTMCell" title="mxnet.gluon.contrib.rnn.Conv1DLSTMCell"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv1DLSTMCell</span></code></a></p></td> |
| <td><p>1D Convolutional LSTM network cell.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv2DLSTMCell" title="mxnet.gluon.contrib.rnn.Conv2DLSTMCell"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv2DLSTMCell</span></code></a></p></td> |
| <td><p>2D Convolutional LSTM network cell.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv3DLSTMCell" title="mxnet.gluon.contrib.rnn.Conv3DLSTMCell"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv3DLSTMCell</span></code></a></p></td> |
| <td><p>3D Convolutional LSTM network cell.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv1DGRUCell" title="mxnet.gluon.contrib.rnn.Conv1DGRUCell"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv1DGRUCell</span></code></a></p></td> |
| <td><p>1D Convolutional Gated Recurrent Unit (GRU) network cell.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv2DGRUCell" title="mxnet.gluon.contrib.rnn.Conv2DGRUCell"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv2DGRUCell</span></code></a></p></td> |
| <td><p>2D Convolutional Gated Recurrent Unit (GRU) network cell.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv3DGRUCell" title="mxnet.gluon.contrib.rnn.Conv3DGRUCell"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Conv3DGRUCell</span></code></a></p></td> |
| <td><p>3D Convolutional Gated Recurrent Unit (GRU) network cell.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.rnn.LSTMPCell" title="mxnet.gluon.contrib.rnn.LSTMPCell"><code class="xref py py-obj docutils literal notranslate"><span class="pre">LSTMPCell</span></code></a></p></td> |
| <td><p>Long Short-Term Memory Projected (LSTMP) network cell.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <div class="section" id="data"> |
| <h2>Data<a class="headerlink" href="#data" title="Permalink to this headline">¶</a></h2> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.data.sampler.IntervalSampler" title="mxnet.gluon.contrib.data.sampler.IntervalSampler"><code class="xref py py-obj docutils literal notranslate"><span class="pre">IntervalSampler</span></code></a></p></td> |
| <td><p>Samples elements from [0, length) at fixed intervals.</p></td> |
| </tr> |
| </tbody> |
| </table> |
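<p>The order in which <cite>IntervalSampler</cite> visits indices can be pictured with a few lines of plain Python: sweep <cite>[0, length)</cite> with a fixed stride, then restart the sweep at offsets 1, 2, … so every index is visited exactly once. This is an illustrative sketch of the sampling order, not the MXNet class; the <cite>rollover</cite> flag mirrors the sampler's parameter of the same name, but the function itself is hypothetical.</p>

```python
def interval_sample(length, interval, rollover=True):
    """Yield indices from [0, length) at a fixed stride.

    With rollover enabled, after the sweep starting at 0 finishes,
    new sweeps start at 1, 2, ..., interval - 1, so every index in
    [0, length) is produced exactly once.
    """
    starts = range(interval) if rollover else range(1)
    for start in starts:
        for index in range(start, length, interval):
            yield index

print(list(interval_sample(10, 3)))
# [0, 3, 6, 9, 1, 4, 7, 2, 5, 8]
```

<p>Without rollover, only the sweep starting at 0 is produced, i.e. a plain strided subsample of the dataset.</p>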
| </div> |
| <div class="section" id="text-dataset"> |
| <h2>Text Dataset<a class="headerlink" href="#text-dataset" title="Permalink to this headline">¶</a></h2> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.data.text.WikiText2" title="mxnet.gluon.contrib.data.text.WikiText2"><code class="xref py py-obj docutils literal notranslate"><span class="pre">WikiText2</span></code></a></p></td> |
| <td><p>WikiText-2 word-level dataset for language modeling, from Salesforce research.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.data.text.WikiText103" title="mxnet.gluon.contrib.data.text.WikiText103"><code class="xref py py-obj docutils literal notranslate"><span class="pre">WikiText103</span></code></a></p></td> |
| <td><p>WikiText-103 word-level dataset for language modeling, from Salesforce research.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <div class="section" id="estimator"> |
| <h2>Estimator<a class="headerlink" href="#estimator" title="Permalink to this headline">¶</a></h2> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.estimator.Estimator" title="mxnet.gluon.contrib.estimator.Estimator"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Estimator</span></code></a></p></td> |
| <td><p>Estimator class for easy model training.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <div class="section" id="event-handler"> |
| <h2>Event Handler<a class="headerlink" href="#event-handler" title="Permalink to this headline">¶</a></h2> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.estimator.StoppingHandler" title="mxnet.gluon.contrib.estimator.StoppingHandler"><code class="xref py py-obj docutils literal notranslate"><span class="pre">StoppingHandler</span></code></a></p></td> |
| <td><p>Stops training when the maximum number of batches or epochs is reached.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.estimator.MetricHandler" title="mxnet.gluon.contrib.estimator.MetricHandler"><code class="xref py py-obj docutils literal notranslate"><span class="pre">MetricHandler</span></code></a></p></td> |
| <td><p>Metric handler that updates metric values at batch end.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.estimator.ValidationHandler" title="mxnet.gluon.contrib.estimator.ValidationHandler"><code class="xref py py-obj docutils literal notranslate"><span class="pre">ValidationHandler</span></code></a></p></td> |
| <td><p>Validation handler that evaluates the model on a validation dataset.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.estimator.LoggingHandler" title="mxnet.gluon.contrib.estimator.LoggingHandler"><code class="xref py py-obj docutils literal notranslate"><span class="pre">LoggingHandler</span></code></a></p></td> |
| <td><p>Basic Logging Handler that applies to every Gluon estimator by default.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.estimator.CheckpointHandler" title="mxnet.gluon.contrib.estimator.CheckpointHandler"><code class="xref py py-obj docutils literal notranslate"><span class="pre">CheckpointHandler</span></code></a></p></td> |
| <td><p>Saves the model after a user-defined period.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.contrib.estimator.EarlyStoppingHandler" title="mxnet.gluon.contrib.estimator.EarlyStoppingHandler"><code class="xref py py-obj docutils literal notranslate"><span class="pre">EarlyStoppingHandler</span></code></a></p></td> |
| <td><p>Stops training early if the monitored value is not improving.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <div class="section" id="module-mxnet.gluon.contrib"> |
| <span id="api-reference"></span><h2>API Reference<a class="headerlink" href="#module-mxnet.gluon.contrib" title="Permalink to this headline">¶</a></h2> |
| <p>Contrib neural network module.</p> |
| <span class="target" id="module-mxnet.gluon.contrib.nn"></span><p>Contributed neural network modules.</p> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.nn.Concurrent"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.nn.</code><code class="sig-name descname">Concurrent</code><span class="sig-paren">(</span><em class="sig-param">axis=-1</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#Concurrent"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.Concurrent" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.basic_layers.Sequential</span></code></p> |
| <p>Lays <cite>Block</cite>s concurrently.</p> |
| <p>This block feeds its input to all children blocks and |
| produces the output by concatenating all the children blocks’ outputs |
| on the specified axis.</p> |
| <p>Example:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">net</span> <span class="o">=</span> <span class="n">Concurrent</span><span class="p">()</span> |
| <span class="c1"># use net's name_scope to give children blocks appropriate names.</span> |
| <span class="k">with</span> <span class="n">net</span><span class="o">.</span><span class="n">name_scope</span><span class="p">():</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">10</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s1">'relu'</span><span class="p">))</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">))</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">Identity</span><span class="p">())</span> |
| </pre></div> |
| </div> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>axis</strong> (<em>int</em><em>, </em><em>default -1</em>) – The axis on which to concatenate the outputs.</p> |
| </dd> |
| </dl> |
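<p>The essential behaviour — one input fanned out to every child, outputs concatenated on <cite>axis</cite> — can be sketched in NumPy. The lambdas below are stand-ins for the child blocks in the example above, chosen only to make the output shapes concrete; this is not the MXNet implementation.</p>

```python
import numpy as np

def concurrent_forward(x, children, axis=-1):
    """Feed the same input to every child block and concatenate
    the children's outputs along the given axis."""
    return np.concatenate([child(x) for child in children], axis=axis)

x = np.ones((5, 32))                    # batch of 5, feature width 32
children = [
    lambda a: a @ np.zeros((32, 10)),   # stand-in for nn.Dense(10)
    lambda a: a @ np.zeros((32, 20)),   # stand-in for nn.Dense(20)
    lambda a: a,                        # stand-in for Identity()
]
out = concurrent_forward(x, children)
print(out.shape)
# (5, 62): widths 10 + 20 + 32 concatenated on the last axis
```

<p>All children must therefore agree on every output dimension except the concatenation axis.</p>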
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.nn.Concurrent.forward"> |
| <code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#Concurrent.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.Concurrent.forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to implement forward computation using <code class="xref py py-class docutils literal notranslate"><span class="pre">NDArray</span></code>. Only |
| accepts positional arguments.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>*args</strong> (<em>list of NDArray</em>) – Input tensors.</p> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.nn.HybridConcurrent"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.nn.</code><code class="sig-name descname">HybridConcurrent</code><span class="sig-paren">(</span><em class="sig-param">axis=-1</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#HybridConcurrent"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.HybridConcurrent" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.basic_layers.HybridSequential</span></code></p> |
| <p>Lays <cite>HybridBlock</cite>s concurrently.</p> |
| <p>This block feeds its input to all children blocks and |
| produces the output by concatenating all the children blocks’ outputs |
| on the specified axis.</p> |
| <p>Example:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">net</span> <span class="o">=</span> <span class="n">HybridConcurrent</span><span class="p">()</span> |
| <span class="c1"># use net's name_scope to give children blocks appropriate names.</span> |
| <span class="k">with</span> <span class="n">net</span><span class="o">.</span><span class="n">name_scope</span><span class="p">():</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">10</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s1">'relu'</span><span class="p">))</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">))</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">Identity</span><span class="p">())</span> |
| </pre></div> |
| </div> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>axis</strong> (<em>int</em><em>, </em><em>default -1</em>) – The axis on which to concatenate the outputs.</p> |
| </dd> |
| </dl> |
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.nn.HybridConcurrent.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#HybridConcurrent.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.HybridConcurrent.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.nn.Identity"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.nn.</code><code class="sig-name descname">Identity</code><span class="sig-paren">(</span><em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#Identity"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.Identity" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Block that passes through the input directly.</p> |
| <p>This block can be used in conjunction with HybridConcurrent |
| block for residual connection.</p> |
| <p>Example:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">net</span> <span class="o">=</span> <span class="n">HybridConcurrent</span><span class="p">()</span> |
| <span class="c1"># use net's name_scope to give child Blocks appropriate names.</span> |
| <span class="k">with</span> <span class="n">net</span><span class="o">.</span><span class="n">name_scope</span><span class="p">():</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">10</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s1">'relu'</span><span class="p">))</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">))</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">Identity</span><span class="p">())</span> |
| </pre></div> |
| </div> |
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.nn.Identity.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#Identity.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.Identity.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.nn.SparseEmbedding"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.nn.</code><code class="sig-name descname">SparseEmbedding</code><span class="sig-paren">(</span><em class="sig-param">input_dim</em>, <em class="sig-param">output_dim</em>, <em class="sig-param">dtype='float32'</em>, <em class="sig-param">weight_initializer=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#SparseEmbedding"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.SparseEmbedding" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.Block</span></code></p> |
| <p>Turns non-negative integers (indexes/tokens) into dense vectors |
| of fixed size, e.g. [4, 20] -> [[0.25, 0.1], [0.6, -0.2]].</p> |
| <p>This SparseBlock is designed for distributed training with extremely large |
| input dimension. Both weight and gradient w.r.t. weight are <cite>RowSparseNDArray</cite>.</p> |
| <p>Note: if <cite>sparse_grad</cite> is set to True, the gradient w.r.t. weight will be |
| sparse. Only a subset of optimizers support sparse gradients, including SGD, AdaGrad |
| and Adam. By default, lazy updates are turned on, which may perform differently |
| from standard updates. For more details, please check the Optimization API at: |
| <a class="reference external" href="https://mxnet.incubator.apache.org/api/python/optimization/optimization.html">https://mxnet.incubator.apache.org/api/python/optimization/optimization.html</a></p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>input_dim</strong> (<em>int</em>) – Size of the vocabulary, i.e. maximum integer index + 1.</p></li> |
| <li><p><strong>output_dim</strong> (<em>int</em>) – Dimension of the dense embedding.</p></li> |
| <li><p><strong>dtype</strong> (<em>str</em><em> or </em><em>np.dtype</em><em>, </em><em>default 'float32'</em>) – Data type of output embeddings.</p></li> |
| <li><p><strong>weight_initializer</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the <cite>embeddings</cite> matrix.</p></li> |
| <li><p><strong>Inputs</strong> – <ul> |
| <li><p><strong>data</strong>: (N-1)-D tensor with shape: <cite>(x1, x2, …, xN-1)</cite>.</p></li> |
| </ul> |
| </p></li> |
| <li><p><strong>Output</strong> – <ul> |
| <li><p><strong>out</strong>: N-D tensor with shape: <cite>(x1, x2, …, xN-1, output_dim)</cite>.</p></li> |
| </ul> |
| </p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.nn.SparseEmbedding.forward"> |
| <code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#SparseEmbedding.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.SparseEmbedding.forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to implement forward computation using <code class="xref py py-class docutils literal notranslate"><span class="pre">NDArray</span></code>. Only |
| accepts positional arguments.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><p><strong>*args</strong> (<em>list of NDArray</em>) – Input tensors.</p> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.nn.SyncBatchNorm"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.nn.</code><code class="sig-name descname">SyncBatchNorm</code><span class="sig-paren">(</span><em class="sig-param">in_channels=0</em>, <em class="sig-param">num_devices=None</em>, <em class="sig-param">momentum=0.9</em>, <em class="sig-param">epsilon=1e-05</em>, <em class="sig-param">center=True</em>, <em class="sig-param">scale=True</em>, <em class="sig-param">use_global_stats=False</em>, <em class="sig-param">beta_initializer='zeros'</em>, <em class="sig-param">gamma_initializer='ones'</em>, <em class="sig-param">running_mean_initializer='zeros'</em>, <em class="sig-param">running_variance_initializer='ones'</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#SyncBatchNorm"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.SyncBatchNorm" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.nn.basic_layers.BatchNorm</span></code></p> |
| <p>Cross-GPU Synchronized Batch normalization (SyncBN)</p> |
| <p>Standard BN <a class="footnote-reference brackets" href="#id3" id="id1">1</a> implementations only normalize the data within each device. |
| SyncBN normalizes the input within the whole mini-batch. |
| We follow the implementation described in the paper <a class="footnote-reference brackets" href="#id4" id="id2">2</a>.</p> |
| <p>Note: Current implementation of SyncBN does not support FP16 training. |
| For FP16 inference, use standard nn.BatchNorm instead of SyncBN.</p> |
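<p>The synchronization itself amounts to aggregating per-device partial sums so that every device normalizes with the statistics of the whole mini-batch. A NumPy sketch of the idea follows; the real layer performs the two aggregations as cross-device all-reduce operations, and the function name here is illustrative.</p>

```python
import numpy as np

def sync_batch_stats(device_batches):
    """Combine per-device sums and sums of squares so the resulting
    mean/variance equal those of the full (global) mini-batch."""
    n = sum(b.shape[0] for b in device_batches)
    total = sum(b.sum(axis=0) for b in device_batches)           # all-reduce #1
    total_sq = sum((b * b).sum(axis=0) for b in device_batches)  # all-reduce #2
    mean = total / n
    var = total_sq / n - mean ** 2
    return mean, var

rng = np.random.default_rng(0)
full = rng.normal(size=(8, 3))        # the whole mini-batch, 3 channels
dev0, dev1 = full[:4], full[4:]       # split across two "GPUs"
mean, var = sync_batch_stats([dev0, dev1])
# matches the statistics of the undivided batch
assert np.allclose(mean, full.mean(axis=0))
assert np.allclose(var, full.var(axis=0))
```

<p>With only per-device statistics, each GPU would normalize with a quarter-size batch here, which is exactly the discrepancy SyncBN removes.</p>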
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>in_channels</strong> (<em>int</em><em>, </em><em>default 0</em>) – Number of channels (feature maps) in input data. If not specified, |
| initialization will be deferred to the first time <cite>forward</cite> is called |
| and <cite>in_channels</cite> will be inferred from the shape of input data.</p></li> |
| <li><p><strong>num_devices</strong> (<em>int</em><em>, </em><em>default number of visible GPUs</em>) – Number of devices across which the batch statistics are synchronized.</p></li> |
| <li><p><strong>momentum</strong> (<em>float</em><em>, </em><em>default 0.9</em>) – Momentum for the moving average.</p></li> |
| <li><p><strong>epsilon</strong> (<em>float</em><em>, </em><em>default 1e-5</em>) – Small float added to variance to avoid dividing by zero.</p></li> |
| <li><p><strong>center</strong> (<em>bool</em><em>, </em><em>default True</em>) – If True, add offset of <cite>beta</cite> to normalized tensor. |
| If False, <cite>beta</cite> is ignored.</p></li> |
| <li><p><strong>scale</strong> (<em>bool</em><em>, </em><em>default True</em>) – If True, multiply by <cite>gamma</cite>. If False, <cite>gamma</cite> is not used. |
| When the next layer is linear (this also holds for e.g. <cite>nn.relu</cite>), |
| this can be disabled, since the scaling |
| will be done by the next layer.</p></li> |
| <li><p><strong>use_global_stats</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, use global moving statistics instead of local batch statistics; this |
| effectively turns batch normalization into a scale-shift operator. |
| If False, use local batch statistics.</p></li> |
| <li><p><strong>beta_initializer</strong> (str or <cite>Initializer</cite>, default ‘zeros’) – Initializer for the beta weight.</p></li> |
| <li><p><strong>gamma_initializer</strong> (str or <cite>Initializer</cite>, default ‘ones’) – Initializer for the gamma weight.</p></li> |
| <li><p><strong>running_mean_initializer</strong> (str or <cite>Initializer</cite>, default ‘zeros’) – Initializer for the running mean.</p></li> |
| <li><p><strong>running_variance_initializer</strong> (str or <cite>Initializer</cite>, default ‘ones’) – Initializer for the running variance.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>data</strong>: input tensor with arbitrary shape.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>out</strong>: output tensor with the same shape as <cite>data</cite>.</p></li> |
| </ul> |
| </dd> |
| <dt>Reference:</dt><dd><dl class="footnote brackets"> |
| <dt class="label" id="id3"><span class="brackets"><a class="fn-backref" href="#id1">1</a></span></dt> |
| <dd><p>Ioffe, Sergey, and Christian Szegedy. “Batch normalization: Accelerating deep network training by reducing internal covariate shift.” <em>ICML 2015</em></p> |
| </dd> |
| <dt class="label" id="id4"><span class="brackets"><a class="fn-backref" href="#id2">2</a></span></dt> |
| <dd><p>Hang Zhang, Kristin Dana, Jianping Shi, Zhongyue Zhang, Xiaogang Wang, Ambrish Tyagi, and Amit Agrawal. “Context Encoding for Semantic Segmentation.” <em>CVPR 2018</em></p> |
| </dd> |
| </dl> |
| </dd> |
| </dl> |
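| <p>For reference, the statistics that SyncBN synchronizes are the ordinary batch-norm statistics, computed over the whole mini-batch rather than over each device’s slice. The following NumPy sketch (an illustration with a hypothetical helper name, not the MXNet implementation) shows that computation for a 2-D input of shape <cite>(N, C)</cite>:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre>import numpy as np |
| |
| def sync_batch_norm_reference(x, gamma, beta, eps=1e-5): |
|     # Mean and variance over the entire mini-batch (axis 0, per channel); |
|     # SyncBN guarantees these are global statistics even when the |
|     # mini-batch is split across several GPUs. |
|     mean = x.mean(axis=0) |
|     var = x.var(axis=0) |
|     x_hat = (x - mean) / np.sqrt(var + eps) |
|     return gamma * x_hat + beta |
| </pre></div> |
| </div> |
| <p>With <cite>gamma</cite> all ones and <cite>beta</cite> all zeros, the output has approximately zero mean and unit variance per channel, independent of how the rows were distributed across devices.</p> |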
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.nn.SyncBatchNorm.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em>, <em class="sig-param">gamma</em>, <em class="sig-param">beta</em>, <em class="sig-param">running_mean</em>, <em class="sig-param">running_var</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#SyncBatchNorm.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.SyncBatchNorm.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Override to construct the symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.nn.PixelShuffle1D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.nn.</code><code class="sig-name descname">PixelShuffle1D</code><span class="sig-paren">(</span><em class="sig-param">factor</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#PixelShuffle1D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.PixelShuffle1D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Pixel-shuffle layer for upsampling in 1 dimension.</p> |
| <p>Pixel-shuffling is the operation of taking groups of values along |
| the <em>channel</em> dimension and regrouping them into blocks of pixels |
| along the <code class="docutils literal notranslate"><span class="pre">W</span></code> dimension, thereby effectively multiplying that dimension |
| by a constant factor in size.</p> |
| <p>For example, a feature map of shape <span class="math notranslate nohighlight">\((fC, W)\)</span> is reshaped |
| into <span class="math notranslate nohighlight">\((C, fW)\)</span> by forming little value groups of size <span class="math notranslate nohighlight">\(f\)</span> |
| and arranging them in a grid of size <span class="math notranslate nohighlight">\(W\)</span>.</p> |
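| <p>Concretely, the rearrangement is just a reshape and an axis swap. A NumPy sketch of the shape transformation for a batched input (the helper name is hypothetical, and the exact value ordering of the MXNet layer may differ):</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre>import numpy as np |
| |
| def pixel_shuffle_1d(x, f): |
|     # (N, f*C, W) -> (N, f, C, W) -> (N, C, W, f) -> (N, C, W*f) |
|     n, fc, w = x.shape |
|     c = fc // f |
|     return x.reshape(n, f, c, w).transpose(0, 2, 3, 1).reshape(n, c, w * f) |
| </pre></div> |
| </div> |
| <p>An input of shape <cite>(1, 8, 3)</cite> with <cite>f=2</cite> comes out with shape <cite>(1, 4, 6)</cite>, matching the example below.</p> |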
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>factor</strong> (<em>int</em><em> or </em><em>1-tuple of int</em>) – Upsampling factor, applied to the <code class="docutils literal notranslate"><span class="pre">W</span></code> dimension.</p></li> |
| <li><p><strong>Inputs</strong> – <ul> |
| <li><p><strong>data</strong>: Tensor of shape <code class="docutils literal notranslate"><span class="pre">(N,</span> <span class="pre">f*C,</span> <span class="pre">W)</span></code>.</p></li> |
| </ul> |
| </p></li> |
| <li><p><strong>Outputs</strong> – <ul> |
| <li><p><strong>out</strong>: Tensor of shape <code class="docutils literal notranslate"><span class="pre">(N,</span> <span class="pre">C,</span> <span class="pre">W*f)</span></code>.</p></li> |
| </ul> |
| </p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p class="rubric">Examples</p> |
| <div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">pxshuf</span> <span class="o">=</span> <span class="n">PixelShuffle1D</span><span class="p">(</span><span class="mi">2</span><span class="p">)</span> |
| <span class="gp">>>> </span><span class="n">x</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="mi">1</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">3</span><span class="p">))</span> |
| <span class="gp">>>> </span><span class="n">pxshuf</span><span class="p">(</span><span class="n">x</span><span class="p">)</span><span class="o">.</span><span class="n">shape</span> |
| <span class="go">(1, 4, 6)</span> |
| </pre></div> |
| </div> |
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.nn.PixelShuffle1D.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#PixelShuffle1D.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.PixelShuffle1D.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Perform pixel-shuffling on the input.</p> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.nn.PixelShuffle2D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.nn.</code><code class="sig-name descname">PixelShuffle2D</code><span class="sig-paren">(</span><em class="sig-param">factor</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#PixelShuffle2D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.PixelShuffle2D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Pixel-shuffle layer for upsampling in 2 dimensions.</p> |
| <p>Pixel-shuffling is the operation of taking groups of values along |
| the <em>channel</em> dimension and regrouping them into blocks of pixels |
| along the <code class="docutils literal notranslate"><span class="pre">H</span></code> and <code class="docutils literal notranslate"><span class="pre">W</span></code> dimensions, thereby effectively multiplying |
| those dimensions by a constant factor in size.</p> |
| <p>For example, a feature map of shape <span class="math notranslate nohighlight">\((f^2 C, H, W)\)</span> is reshaped |
| into <span class="math notranslate nohighlight">\((C, fH, fW)\)</span> by forming little <span class="math notranslate nohighlight">\(f \times f\)</span> blocks |
| of pixels and arranging them in an <span class="math notranslate nohighlight">\(H \times W\)</span> grid.</p> |
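| <p>The 2-D rearrangement can likewise be sketched in NumPy as a reshape, an axis permutation that interleaves each spatial axis with its factor, and a final reshape (hypothetical helper, shown for the shape bookkeeping only; the MXNet layer’s exact value ordering may differ):</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre>import numpy as np |
| |
| def pixel_shuffle_2d(x, f1, f2): |
|     # (N, f1*f2*C, H, W) -> (N, C, H*f1, W*f2) |
|     n, ch, h, w = x.shape |
|     c = ch // (f1 * f2) |
|     x = x.reshape(n, f1, f2, c, h, w) |
|     x = x.transpose(0, 3, 4, 1, 5, 2)  # (N, C, H, f1, W, f2) |
|     return x.reshape(n, c, h * f1, w * f2) |
| </pre></div> |
| </div> |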
| <p>Pixel-shuffling together with regular convolution is an alternative, |
| learnable way of upsampling an image by arbitrary factors. It is reported |
| to help overcome checkerboard artifacts that are common in upsampling with |
| transposed convolutions (also called deconvolutions). See the paper |
| <a class="reference external" href="https://arxiv.org/abs/1609.05158">Real-Time Single Image and Video Super-Resolution Using an Efficient |
| Sub-Pixel Convolutional Neural Network</a> |
| for further details.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>factor</strong> (<em>int</em><em> or </em><em>2-tuple of int</em>) – Upsampling factors, applied to the <code class="docutils literal notranslate"><span class="pre">H</span></code> and <code class="docutils literal notranslate"><span class="pre">W</span></code> dimensions, |
| in that order.</p></li> |
| <li><p><strong>Inputs</strong> – <ul> |
| <li><p><strong>data</strong>: Tensor of shape <code class="docutils literal notranslate"><span class="pre">(N,</span> <span class="pre">f1*f2*C,</span> <span class="pre">H,</span> <span class="pre">W)</span></code>.</p></li> |
| </ul> |
| </p></li> |
| <li><p><strong>Outputs</strong> – <ul> |
| <li><p><strong>out</strong>: Tensor of shape <code class="docutils literal notranslate"><span class="pre">(N,</span> <span class="pre">C,</span> <span class="pre">H*f1,</span> <span class="pre">W*f2)</span></code>.</p></li> |
| </ul> |
| </p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p class="rubric">Examples</p> |
| <div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">pxshuf</span> <span class="o">=</span> <span class="n">PixelShuffle2D</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">))</span> |
| <span class="gp">>>> </span><span class="n">x</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="mi">1</span><span class="p">,</span> <span class="mi">12</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">5</span><span class="p">))</span> |
| <span class="gp">>>> </span><span class="n">pxshuf</span><span class="p">(</span><span class="n">x</span><span class="p">)</span><span class="o">.</span><span class="n">shape</span> |
| <span class="go">(1, 2, 6, 15)</span> |
| </pre></div> |
| </div> |
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.nn.PixelShuffle2D.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#PixelShuffle2D.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.PixelShuffle2D.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Perform pixel-shuffling on the input.</p> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.nn.PixelShuffle3D"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.nn.</code><code class="sig-name descname">PixelShuffle3D</code><span class="sig-paren">(</span><em class="sig-param">factor</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#PixelShuffle3D"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.PixelShuffle3D" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Pixel-shuffle layer for upsampling in 3 dimensions.</p> |
| <p>Pixel-shuffling (or voxel-shuffling in 3D) is the operation of taking |
| groups of values along the <em>channel</em> dimension and regrouping them into |
| blocks of voxels along the <code class="docutils literal notranslate"><span class="pre">D</span></code>, <code class="docutils literal notranslate"><span class="pre">H</span></code> and <code class="docutils literal notranslate"><span class="pre">W</span></code> dimensions, thereby |
| effectively multiplying those dimensions by a constant factor in size.</p> |
| <p>For example, a feature map of shape <span class="math notranslate nohighlight">\((f^3 C, D, H, W)\)</span> is reshaped |
| into <span class="math notranslate nohighlight">\((C, fD, fH, fW)\)</span> by forming little <span class="math notranslate nohighlight">\(f \times f \times f\)</span> |
| blocks of voxels and arranging them in a <span class="math notranslate nohighlight">\(D \times H \times W\)</span> grid.</p> |
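| <p>The same bookkeeping generalizes to any number of spatial dimensions. A dimension-agnostic NumPy sketch (hypothetical helper, illustrating only the shape transformation) that covers the 1-D, 2-D and 3-D cases:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre>import math |
| import numpy as np |
| |
| def pixel_shuffle_nd(x, factors): |
|     # (N, f1*...*fk*C, d1, ..., dk) -> (N, C, d1*f1, ..., dk*fk) |
|     k = len(factors) |
|     n, ch = x.shape[:2] |
|     dims = x.shape[2:] |
|     c = ch // math.prod(factors) |
|     x = x.reshape((n,) + tuple(factors) + (c,) + tuple(dims)) |
|     # Interleave every spatial axis with its factor axis: (N, C, d1, f1, d2, f2, ...) |
|     order = [0, k + 1] |
|     for i in range(k): |
|         order += [k + 2 + i, 1 + i] |
|     x = x.transpose(order) |
|     return x.reshape((n, c) + tuple(d * f for d, f in zip(dims, factors))) |
| </pre></div> |
| </div> |
| <p>For instance, an input of shape <cite>(1, 48, 3, 5, 7)</cite> with factors <cite>(2, 3, 4)</cite> yields shape <cite>(1, 2, 6, 15, 28)</cite>, matching the example below.</p> |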
| <p>Pixel-shuffling together with regular convolution is an alternative, |
| learnable way of upsampling an image by arbitrary factors. It is reported |
| to help overcome checkerboard artifacts that are common in upsampling with |
| transposed convolutions (also called deconvolutions). See the paper |
| <a class="reference external" href="https://arxiv.org/abs/1609.05158">Real-Time Single Image and Video Super-Resolution Using an Efficient |
| Sub-Pixel Convolutional Neural Network</a> |
| for further details.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>factor</strong> (<em>int</em><em> or </em><em>3-tuple of int</em>) – Upsampling factors, applied to the <code class="docutils literal notranslate"><span class="pre">D</span></code>, <code class="docutils literal notranslate"><span class="pre">H</span></code> and <code class="docutils literal notranslate"><span class="pre">W</span></code> |
| dimensions, in that order.</p></li> |
| <li><p><strong>Inputs</strong> – <ul> |
| <li><p><strong>data</strong>: Tensor of shape <code class="docutils literal notranslate"><span class="pre">(N,</span> <span class="pre">f1*f2*f3*C,</span> <span class="pre">D,</span> <span class="pre">H,</span> <span class="pre">W)</span></code>.</p></li> |
| </ul> |
| </p></li> |
| <li><p><strong>Outputs</strong> – <ul> |
| <li><p><strong>out</strong>: Tensor of shape <code class="docutils literal notranslate"><span class="pre">(N,</span> <span class="pre">C,</span> <span class="pre">D*f1,</span> <span class="pre">H*f2,</span> <span class="pre">W*f3)</span></code>.</p></li> |
| </ul> |
| </p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p class="rubric">Examples</p> |
| <div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">pxshuf</span> <span class="o">=</span> <span class="n">PixelShuffle3D</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">))</span> |
| <span class="gp">>>> </span><span class="n">x</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="mi">1</span><span class="p">,</span> <span class="mi">48</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">5</span><span class="p">,</span> <span class="mi">7</span><span class="p">))</span> |
| <span class="gp">>>> </span><span class="n">pxshuf</span><span class="p">(</span><span class="n">x</span><span class="p">)</span><span class="o">.</span><span class="n">shape</span> |
| <span class="go">(1, 2, 6, 15, 28)</span> |
| </pre></div> |
| </div> |
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.nn.PixelShuffle3D.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/nn/basic_layers.html#PixelShuffle3D.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.nn.PixelShuffle3D.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Perform pixel-shuffling on the input.</p> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <span class="target" id="module-mxnet.gluon.contrib.cnn"></span><p>Contrib convolutional neural network module.</p> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.cnn.DeformableConvolution"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.cnn.</code><code class="sig-name descname">DeformableConvolution</code><span class="sig-paren">(</span><em class="sig-param">channels</em>, <em class="sig-param">kernel_size=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">strides=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">padding=(0</em>, <em class="sig-param">0)</em>, <em class="sig-param">dilation=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">groups=1</em>, <em class="sig-param">num_deformable_group=1</em>, <em class="sig-param">layout='NCHW'</em>, <em class="sig-param">use_bias=True</em>, <em class="sig-param">in_channels=0</em>, <em class="sig-param">activation=None</em>, <em class="sig-param">weight_initializer=None</em>, <em class="sig-param">bias_initializer='zeros'</em>, <em class="sig-param">offset_weight_initializer='zeros'</em>, <em class="sig-param">offset_bias_initializer='zeros'</em>, <em class="sig-param">offset_use_bias=True</em>, <em class="sig-param">op_name='DeformableConvolution'</em>, <em class="sig-param">adj=None</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/cnn/conv_layers.html#DeformableConvolution"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.cnn.DeformableConvolution" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>2-D Deformable Convolution v1 (Dai, 2017). |
| A normal convolution samples the input on a regular grid, while the sampling |
| points of a deformable convolution can be offset. The offsets are learned by a |
| separate convolution layer during training. Both the convolution layer that |
| generates the output features and the one that generates the offsets are included in this Gluon layer.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>channels</strong> (<em>int</em><em>,</em>) – The dimensionality of the output space |
| i.e. the number of output channels in the convolution.</p></li> |
| <li><p><strong>kernel_size</strong> (<em>int</em><em> or </em><em>tuple/list of 2 ints</em><em>, </em><em>(</em><em>Default value =</em><em> (</em><em>1</em><em>,</em><em>1</em><em>)</em><em>)</em>) – Specifies the dimensions of the convolution window.</p></li> |
| <li><p><strong>strides</strong> (<em>int</em><em> or </em><em>tuple/list of 2 ints</em><em>, </em><em>(</em><em>Default value =</em><em> (</em><em>1</em><em>,</em><em>1</em><em>)</em><em>)</em>) – Specifies the strides of the convolution.</p></li> |
| <li><p><strong>padding</strong> (<em>int</em><em> or </em><em>tuple/list of 2 ints</em><em>, </em><em>(</em><em>Default value =</em><em> (</em><em>0</em><em>,</em><em>0</em><em>)</em><em>)</em>) – If padding is non-zero, the input is implicitly zero-padded |
| on both sides by <cite>padding</cite> points.</p></li> |
| <li><p><strong>dilation</strong> (<em>int</em><em> or </em><em>tuple/list of 2 ints</em><em>, </em><em>(</em><em>Default value =</em><em> (</em><em>1</em><em>,</em><em>1</em><em>)</em><em>)</em>) – Specifies the dilation rate to use for dilated convolution.</p></li> |
| <li><p><strong>groups</strong> (<em>int</em><em>, </em><em>(</em><em>Default value = 1</em><em>)</em>) – Controls the connections between inputs and outputs. |
| At groups=1, all inputs are convolved to all outputs. |
| At groups=2, the operation becomes equivalent to having two convolution |
| layers side by side, each seeing half the input channels and producing |
| half the output channels, with the two outputs subsequently concatenated.</p></li> |
| <li><p><strong>num_deformable_group</strong> (<em>int</em><em>, </em><em>(</em><em>Default value = 1</em><em>)</em>) – Number of deformable group partitions.</p></li> |
| <li><p><strong>layout</strong> (<em>str</em><em>, </em><em>(</em><em>Default value = NCHW</em><em>)</em>) – Dimension ordering of data and weight. Can be ‘NCW’, ‘NWC’, ‘NCHW’, |
| ‘NHWC’, ‘NCDHW’, ‘NDHWC’, etc. ‘N’, ‘C’, ‘H’, ‘W’, ‘D’ stand for the |
| batch, channel, height, width and depth dimensions respectively. |
| Convolution is performed over the ‘D’, ‘H’, and ‘W’ dimensions.</p></li> |
| <li><p><strong>use_bias</strong> (<em>bool</em><em>, </em><em>(</em><em>Default value = True</em><em>)</em>) – Whether the layer for generating the output features uses a bias vector.</p></li> |
| <li><p><strong>in_channels</strong> (<em>int</em><em>, </em><em>(</em><em>Default value = 0</em><em>)</em>) – The number of input channels to this layer. If not specified, |
| initialization will be deferred to the first time <cite>forward</cite> is called |
| and input channels will be inferred from the shape of input data.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em><em>, </em><em>(</em><em>Default value = None</em><em>)</em>) – Activation function to use. See <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a>. |
| If you don’t specify anything, no activation is applied |
| (i.e. “linear” activation: <cite>a(x) = x</cite>).</p></li> |
| <li><p><strong>weight_initializer</strong> (str or <cite>Initializer</cite>, (Default value = None)) – Initializer for the <cite>weight</cite> weights matrix for the convolution layer |
| for generating the output features.</p></li> |
| <li><p><strong>bias_initializer</strong> (str or <cite>Initializer</cite>, (Default value = zeros)) – Initializer for the bias vector for the convolution layer |
| for generating the output features.</p></li> |
| <li><p><strong>offset_weight_initializer</strong> (str or <cite>Initializer</cite>, (Default value = zeros)) – Initializer for the <cite>weight</cite> weights matrix for the convolution layer |
| for generating the offset.</p></li> |
| <li><p><strong>offset_bias_initializer</strong> (str or <cite>Initializer</cite>, (Default value = zeros),) – Initializer for the bias vector for the convolution layer |
| for generating the offset.</p></li> |
| <li><p><strong>offset_use_bias</strong> (<em>bool</em><em>, </em><em>(</em><em>Default value = True</em><em>)</em>) – Whether the layer for generating the offset uses a bias vector.</p></li> |
| <li><p><strong>Inputs</strong> – <ul> |
| <li><p><strong>data</strong>: 4D input tensor with shape |
| <cite>(batch_size, in_channels, height, width)</cite> when <cite>layout</cite> is <cite>NCHW</cite>. |
| For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </p></li> |
| <li><p><strong>Outputs</strong> – <ul> |
| <li><p><strong>out</strong>: 4D output tensor with shape |
| <cite>(batch_size, channels, out_height, out_width)</cite> when <cite>layout</cite> is <cite>NCHW</cite>. |
| out_height and out_width are calculated as:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">out_height</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">height</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="n">dilation</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">/</span><span class="n">stride</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| <span class="n">out_width</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">width</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">-</span><span class="n">dilation</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">/</span><span class="n">stride</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| </pre></div> |
| </div> |
| </li> |
| </ul> |
| </p></li> |
| </ul> |
| </dd> |
| </dl> |
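| <p>The output-size expression above can be checked with a few lines of plain Python (a hypothetical helper that mirrors the formula, applied to one spatial dimension at a time):</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre>def conv_out_size(size, kernel, stride=1, padding=0, dilation=1): |
|     # floor((size + 2*padding - dilation*(kernel - 1) - 1) / stride) + 1 |
|     return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1 |
| </pre></div> |
| </div> |
| <p>For example, a 224-pixel dimension with a 3-wide kernel, stride 2 and padding 1 gives <cite>conv_out_size(224, 3, stride=2, padding=1) == 112</cite>.</p> |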
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.cnn.DeformableConvolution.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em>, <em class="sig-param">offset_weight</em>, <em class="sig-param">deformable_conv_weight</em>, <em class="sig-param">offset_bias=None</em>, <em class="sig-param">deformable_conv_bias=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/cnn/conv_layers.html#DeformableConvolution.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.cnn.DeformableConvolution.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Override to construct the symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.cnn.ModulatedDeformableConvolution"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.cnn.</code><code class="sig-name descname">ModulatedDeformableConvolution</code><span class="sig-paren">(</span><em class="sig-param">channels</em>, <em class="sig-param">kernel_size=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">strides=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">padding=(0</em>, <em class="sig-param">0)</em>, <em class="sig-param">dilation=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">groups=1</em>, <em class="sig-param">num_deformable_group=1</em>, <em class="sig-param">layout='NCHW'</em>, <em class="sig-param">use_bias=True</em>, <em class="sig-param">in_channels=0</em>, <em class="sig-param">activation=None</em>, <em class="sig-param">weight_initializer=None</em>, <em class="sig-param">bias_initializer='zeros'</em>, <em class="sig-param">offset_weight_initializer='zeros'</em>, <em class="sig-param">offset_bias_initializer='zeros'</em>, <em class="sig-param">offset_use_bias=True</em>, <em class="sig-param">op_name='ModulatedDeformableConvolution'</em>, <em class="sig-param">adj=None</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/cnn/conv_layers.html#ModulatedDeformableConvolution"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.cnn.ModulatedDeformableConvolution" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>2-D Deformable Convolution v2 (Dai, 2018).</p> |
| <p>The modulated deformable convolution operation is described in <a class="reference external" href="https://arxiv.org/abs/1811.11168">https://arxiv.org/abs/1811.11168</a></p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>channels</strong> (<em>int</em>) – The dimensionality of the output space, |
| i.e. the number of output channels in the convolution.</p></li> |
| <li><p><strong>kernel_size</strong> (<em>int</em><em> or </em><em>tuple/list of 2 ints</em><em>, </em><em>(</em><em>Default value =</em><em> (</em><em>1</em><em>,</em><em>1</em><em>)</em><em>)</em>) – Specifies the dimensions of the convolution window.</p></li> |
| <li><p><strong>strides</strong> (<em>int</em><em> or </em><em>tuple/list of 2 ints</em><em>, </em><em>(</em><em>Default value =</em><em> (</em><em>1</em><em>,</em><em>1</em><em>)</em><em>)</em>) – Specifies the strides of the convolution.</p></li> |
| <li><p><strong>padding</strong> (<em>int</em><em> or </em><em>tuple/list of 2 ints</em><em>, </em><em>(</em><em>Default value =</em><em> (</em><em>0</em><em>,</em><em>0</em><em>)</em><em>)</em>) – If padding is non-zero, the input is implicitly zero-padded |
| on both sides by <cite>padding</cite> points.</p></li> |
| <li><p><strong>dilation</strong> (<em>int</em><em> or </em><em>tuple/list of 2 ints</em><em>, </em><em>(</em><em>Default value =</em><em> (</em><em>1</em><em>,</em><em>1</em><em>)</em><em>)</em>) – Specifies the dilation rate to use for dilated convolution.</p></li> |
| <li><p><strong>groups</strong> (<em>int</em><em>, </em><em>(</em><em>Default value = 1</em><em>)</em>) – Controls the connections between inputs and outputs. |
| At groups=1, all inputs are convolved to all outputs. |
| At groups=2, the operation becomes equivalent to having two convolution |
| layers side by side, each seeing half the input channels, and producing |
| half the output channels, and both subsequently concatenated.</p></li> |
| <li><p><strong>num_deformable_group</strong> (<em>int</em><em>, </em><em>(</em><em>Default value = 1</em><em>)</em>) – Number of deformable group partitions.</p></li> |
| <li><p><strong>layout</strong> (<em>str</em><em>, </em><em>(</em><em>Default value = NCHW</em><em>)</em>) – Dimension ordering of data and weight. Can be ‘NCW’, ‘NWC’, ‘NCHW’, |
| ‘NHWC’, ‘NCDHW’, ‘NDHWC’, etc. ‘N’, ‘C’, ‘H’, ‘W’ and ‘D’ stand for |
| batch, channel, height, width and depth dimensions respectively. |
| Convolution is performed over ‘D’, ‘H’, and ‘W’ dimensions.</p></li> |
| <li><p><strong>use_bias</strong> (<em>bool</em><em>, </em><em>(</em><em>Default value = True</em><em>)</em>) – Whether the layer for generating the output features uses a bias vector.</p></li> |
| <li><p><strong>in_channels</strong> (<em>int</em><em>, </em><em>(</em><em>Default value = 0</em><em>)</em>) – The number of input channels to this layer. If not specified, |
| initialization will be deferred to the first time <cite>forward</cite> is called |
| and input channels will be inferred from the shape of input data.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em><em>, </em><em>(</em><em>Default value = None</em><em>)</em>) – Activation function to use. See <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a>. |
| If you don’t specify anything, no activation is applied |
(i.e. “linear” activation: <cite>a(x) = x</cite>).</p></li> |
| <li><p><strong>weight_initializer</strong> (str or <cite>Initializer</cite>, (Default value = None)) – Initializer for the <cite>weight</cite> weights matrix for the convolution layer |
| for generating the output features.</p></li> |
| <li><p><strong>bias_initializer</strong> (str or <cite>Initializer</cite>, (Default value = zeros)) – Initializer for the bias vector for the convolution layer |
| for generating the output features.</p></li> |
| <li><p><strong>offset_weight_initializer</strong> (str or <cite>Initializer</cite>, (Default value = zeros)) – Initializer for the <cite>weight</cite> weights matrix for the convolution layer |
| for generating the offset.</p></li> |
| <li><p><strong>offset_bias_initializer</strong> (str or <cite>Initializer</cite>, (Default value = zeros),) – Initializer for the bias vector for the convolution layer |
| for generating the offset.</p></li> |
| <li><p><strong>offset_use_bias</strong> (<em>bool</em><em>, </em><em>(</em><em>Default value = True</em><em>)</em>) – Whether the layer for generating the offset uses a bias vector.</p></li> |
| <li><p><strong>Inputs</strong> – <ul> |
| <li><p><strong>data</strong>: 4D input tensor with shape |
| <cite>(batch_size, in_channels, height, width)</cite> when <cite>layout</cite> is <cite>NCHW</cite>. |
| For other layouts shape is permuted accordingly.</p></li> |
| </ul> |
| </p></li> |
| <li><p><strong>Outputs</strong> – <ul> |
| <li><p><strong>out</strong>: 4D output tensor with shape |
| <cite>(batch_size, channels, out_height, out_width)</cite> when <cite>layout</cite> is <cite>NCHW</cite>. |
| out_height and out_width are calculated as:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">out_height</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">height</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="n">dilation</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">/</span><span class="n">strides</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| <span class="n">out_width</span> <span class="o">=</span> <span class="n">floor</span><span class="p">((</span><span class="n">width</span><span class="o">+</span><span class="mi">2</span><span class="o">*</span><span class="n">padding</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">-</span><span class="n">dilation</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="n">kernel_size</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">/</span><span class="n">strides</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span><span class="o">+</span><span class="mi">1</span> |
| </pre></div> |
| </div> |
| </li> |
| </ul> |
| </p></li> |
| </ul> |
| </dd> |
| </dl> |
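| <p>The output-size formula above can be checked with a small stand-alone helper (plain Python; the function name is illustrative and not part of the MXNet API):</p> |

```python
from math import floor

def conv_output_size(size, padding, dilation, kernel, stride):
    """Spatial output size along one axis, per the formula above."""
    return floor((size + 2 * padding - dilation * (kernel - 1) - 1) / stride) + 1

# A 224-pixel axis through a 3-wide kernel, padding 1, stride 2, dilation 1:
print(conv_output_size(224, padding=1, dilation=1, kernel=3, stride=2))  # 112
```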
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.cnn.ModulatedDeformableConvolution.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em>, <em class="sig-param">offset_weight</em>, <em class="sig-param">deformable_conv_weight</em>, <em class="sig-param">offset_bias=None</em>, <em class="sig-param">deformable_conv_bias=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/cnn/conv_layers.html#ModulatedDeformableConvolution.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.cnn.ModulatedDeformableConvolution.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <span class="target" id="module-mxnet.gluon.contrib.rnn"></span><p>Contrib recurrent neural network module.</p> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv1DRNNCell"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.rnn.</code><code class="sig-name descname">Conv1DRNNCell</code><span class="sig-paren">(</span><em class="sig-param">input_shape</em>, <em class="sig-param">hidden_channels</em>, <em class="sig-param">i2h_kernel</em>, <em class="sig-param">h2h_kernel</em>, <em class="sig-param">i2h_pad=(0</em>, <em class="sig-param">)</em>, <em class="sig-param">i2h_dilate=(1</em>, <em class="sig-param">)</em>, <em class="sig-param">h2h_dilate=(1</em>, <em class="sig-param">)</em>, <em class="sig-param">i2h_weight_initializer=None</em>, <em class="sig-param">h2h_weight_initializer=None</em>, <em class="sig-param">i2h_bias_initializer='zeros'</em>, <em class="sig-param">h2h_bias_initializer='zeros'</em>, <em class="sig-param">conv_layout='NCW'</em>, <em class="sig-param">activation='tanh'</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv1DRNNCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv1DRNNCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvRNNCell</span></code></p> |
| <p>1D Convolutional RNN cell.</p> |
| <div class="math notranslate nohighlight"> |
\[h_t = \tanh(W_i \ast x_t + R_i \ast h_{t-1} + b_i)\]</div> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding dimension of the batch size |
| and sequence length. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCW’ the shape should be (C, W).</p></li> |
| <li><p><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</p></li> |
| <li><p><strong>i2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Input convolution kernel sizes.</p></li> |
| <li><p><strong>h2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</p></li> |
| <li><p><strong>i2h_pad</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>0</em><em>,</em><em>)</em>) – Pad for input convolution.</p></li> |
| <li><p><strong>i2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>,</em><em>)</em>) – Input convolution dilate.</p></li> |
| <li><p><strong>h2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>,</em><em>)</em>) – Recurrent convolution dilate.</p></li> |
| <li><p><strong>i2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the input weights matrix, used for the input convolutions.</p></li> |
| <li><p><strong>h2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</p></li> |
| <li><p><strong>i2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the input convolution bias vectors.</p></li> |
| <li><p><strong>h2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the recurrent convolution bias vectors.</p></li> |
| <li><p><strong>conv_layout</strong> (<em>str</em><em>, </em><em>default 'NCW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCW’ and ‘NWC’.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em><em> or </em><a class="reference internal" href="../block.html#mxnet.gluon.Block" title="mxnet.gluon.Block"><em>gluon.Block</em></a><em>, </em><em>default 'tanh'</em>) – Type of activation function. |
| If argument type is string, it’s equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks such as nn.LeakyReLU can be used.</p></li> |
| <li><p><strong>prefix</strong> (str, default <code class="docutils literal notranslate"><span class="pre">'conv_rnn_'</span></code>) – Prefix for name of layers (and name of weight if params is None).</p></li> |
| <li><p><strong>params</strong> (<em>RNNParams</em><em>, </em><em>default None</em>) – Container for weight sharing between cells. Created if None.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv2DRNNCell"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.rnn.</code><code class="sig-name descname">Conv2DRNNCell</code><span class="sig-paren">(</span><em class="sig-param">input_shape</em>, <em class="sig-param">hidden_channels</em>, <em class="sig-param">i2h_kernel</em>, <em class="sig-param">h2h_kernel</em>, <em class="sig-param">i2h_pad=(0</em>, <em class="sig-param">0)</em>, <em class="sig-param">i2h_dilate=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">h2h_dilate=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">i2h_weight_initializer=None</em>, <em class="sig-param">h2h_weight_initializer=None</em>, <em class="sig-param">i2h_bias_initializer='zeros'</em>, <em class="sig-param">h2h_bias_initializer='zeros'</em>, <em class="sig-param">conv_layout='NCHW'</em>, <em class="sig-param">activation='tanh'</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv2DRNNCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv2DRNNCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvRNNCell</span></code></p> |
| <p>2D Convolutional RNN cell.</p> |
| <div class="math notranslate nohighlight"> |
\[h_t = \tanh(W_i \ast x_t + R_i \ast h_{t-1} + b_i)\]</div> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding dimension of the batch size |
| and sequence length. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCHW’ the shape should be (C, H, W).</p></li> |
| <li><p><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</p></li> |
| <li><p><strong>i2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Input convolution kernel sizes.</p></li> |
| <li><p><strong>h2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</p></li> |
| <li><p><strong>i2h_pad</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>0</em><em>, </em><em>0</em><em>)</em>) – Pad for input convolution.</p></li> |
| <li><p><strong>i2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>, </em><em>1</em><em>)</em>) – Input convolution dilate.</p></li> |
| <li><p><strong>h2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>, </em><em>1</em><em>)</em>) – Recurrent convolution dilate.</p></li> |
| <li><p><strong>i2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the input weights matrix, used for the input convolutions.</p></li> |
| <li><p><strong>h2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</p></li> |
| <li><p><strong>i2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the input convolution bias vectors.</p></li> |
| <li><p><strong>h2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the recurrent convolution bias vectors.</p></li> |
| <li><p><strong>conv_layout</strong> (<em>str</em><em>, </em><em>default 'NCHW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCHW’ and ‘NHWC’.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em><em> or </em><a class="reference internal" href="../block.html#mxnet.gluon.Block" title="mxnet.gluon.Block"><em>gluon.Block</em></a><em>, </em><em>default 'tanh'</em>) – Type of activation function. |
| If argument type is string, it’s equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks such as nn.LeakyReLU can be used.</p></li> |
| <li><p><strong>prefix</strong> (str, default <code class="docutils literal notranslate"><span class="pre">'conv_rnn_'</span></code>) – Prefix for name of layers (and name of weight if params is None).</p></li> |
| <li><p><strong>params</strong> (<em>RNNParams</em><em>, </em><em>default None</em>) – Container for weight sharing between cells. Created if None.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv3DRNNCell"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.rnn.</code><code class="sig-name descname">Conv3DRNNCell</code><span class="sig-paren">(</span><em class="sig-param">input_shape</em>, <em class="sig-param">hidden_channels</em>, <em class="sig-param">i2h_kernel</em>, <em class="sig-param">h2h_kernel</em>, <em class="sig-param">i2h_pad=(0</em>, <em class="sig-param">0</em>, <em class="sig-param">0)</em>, <em class="sig-param">i2h_dilate=(1</em>, <em class="sig-param">1</em>, <em class="sig-param">1)</em>, <em class="sig-param">h2h_dilate=(1</em>, <em class="sig-param">1</em>, <em class="sig-param">1)</em>, <em class="sig-param">i2h_weight_initializer=None</em>, <em class="sig-param">h2h_weight_initializer=None</em>, <em class="sig-param">i2h_bias_initializer='zeros'</em>, <em class="sig-param">h2h_bias_initializer='zeros'</em>, <em class="sig-param">conv_layout='NCDHW'</em>, <em class="sig-param">activation='tanh'</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv3DRNNCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv3DRNNCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvRNNCell</span></code></p> |
| <p>3D Convolutional RNN cell.</p> |
| <div class="math notranslate nohighlight"> |
\[h_t = \tanh(W_i \ast x_t + R_i \ast h_{t-1} + b_i)\]</div> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding dimension of the batch size |
| and sequence length. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCDHW’ the shape should be (C, D, H, W).</p></li> |
| <li><p><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</p></li> |
| <li><p><strong>i2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Input convolution kernel sizes.</p></li> |
| <li><p><strong>h2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</p></li> |
| <li><p><strong>i2h_pad</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>0</em><em>, </em><em>0</em><em>, </em><em>0</em><em>)</em>) – Pad for input convolution.</p></li> |
| <li><p><strong>i2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>, </em><em>1</em><em>, </em><em>1</em><em>)</em>) – Input convolution dilate.</p></li> |
| <li><p><strong>h2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>, </em><em>1</em><em>, </em><em>1</em><em>)</em>) – Recurrent convolution dilate.</p></li> |
| <li><p><strong>i2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the input weights matrix, used for the input convolutions.</p></li> |
| <li><p><strong>h2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</p></li> |
| <li><p><strong>i2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the input convolution bias vectors.</p></li> |
| <li><p><strong>h2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the recurrent convolution bias vectors.</p></li> |
| <li><p><strong>conv_layout</strong> (<em>str</em><em>, </em><em>default 'NCDHW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCDHW’ and ‘NDHWC’.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em><em> or </em><a class="reference internal" href="../block.html#mxnet.gluon.Block" title="mxnet.gluon.Block"><em>gluon.Block</em></a><em>, </em><em>default 'tanh'</em>) – Type of activation function. |
| If argument type is string, it’s equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks such as nn.LeakyReLU can be used.</p></li> |
| <li><p><strong>prefix</strong> (str, default <code class="docutils literal notranslate"><span class="pre">'conv_rnn_'</span></code>) – Prefix for name of layers (and name of weight if params is None).</p></li> |
| <li><p><strong>params</strong> (<em>RNNParams</em><em>, </em><em>default None</em>) – Container for weight sharing between cells. Created if None.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv1DLSTMCell"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.rnn.</code><code class="sig-name descname">Conv1DLSTMCell</code><span class="sig-paren">(</span><em class="sig-param">input_shape</em>, <em class="sig-param">hidden_channels</em>, <em class="sig-param">i2h_kernel</em>, <em class="sig-param">h2h_kernel</em>, <em class="sig-param">i2h_pad=(0</em>, <em class="sig-param">)</em>, <em class="sig-param">i2h_dilate=(1</em>, <em class="sig-param">)</em>, <em class="sig-param">h2h_dilate=(1</em>, <em class="sig-param">)</em>, <em class="sig-param">i2h_weight_initializer=None</em>, <em class="sig-param">h2h_weight_initializer=None</em>, <em class="sig-param">i2h_bias_initializer='zeros'</em>, <em class="sig-param">h2h_bias_initializer='zeros'</em>, <em class="sig-param">conv_layout='NCW'</em>, <em class="sig-param">activation='tanh'</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv1DLSTMCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv1DLSTMCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvLSTMCell</span></code></p> |
| <p>1D Convolutional LSTM network cell.</p> |
| <p><a class="reference external" href="https://arxiv.org/abs/1506.04214">“Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting”</a>, Shi et al., NIPS 2015.</p> |
| <div class="math notranslate nohighlight"> |
| \[\begin{split}\begin{array}{ll} |
| i_t = \sigma(W_i \ast x_t + R_i \ast h_{t-1} + b_i) \\ |
| f_t = \sigma(W_f \ast x_t + R_f \ast h_{t-1} + b_f) \\ |
| o_t = \sigma(W_o \ast x_t + R_o \ast h_{t-1} + b_o) \\ |
c^\prime_t = \tanh(W_c \ast x_t + R_c \ast h_{t-1} + b_c) \\ |
| c_t = f_t \circ c_{t-1} + i_t \circ c^\prime_t \\ |
| h_t = o_t \circ \tanh(c_t) \\ |
| \end{array}\end{split}\]</div> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding dimension of the batch size |
| and sequence length. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCW’ the shape should be (C, W).</p></li> |
| <li><p><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</p></li> |
| <li><p><strong>i2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Input convolution kernel sizes.</p></li> |
| <li><p><strong>h2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</p></li> |
| <li><p><strong>i2h_pad</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>0</em><em>,</em><em>)</em>) – Pad for input convolution.</p></li> |
| <li><p><strong>i2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>,</em><em>)</em>) – Input convolution dilate.</p></li> |
| <li><p><strong>h2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>,</em><em>)</em>) – Recurrent convolution dilate.</p></li> |
| <li><p><strong>i2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the input weights matrix, used for the input convolutions.</p></li> |
| <li><p><strong>h2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</p></li> |
| <li><p><strong>i2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the input convolution bias vectors.</p></li> |
| <li><p><strong>h2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the recurrent convolution bias vectors.</p></li> |
| <li><p><strong>conv_layout</strong> (<em>str</em><em>, </em><em>default 'NCW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCW’ and ‘NWC’.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em><em> or </em><a class="reference internal" href="../block.html#mxnet.gluon.Block" title="mxnet.gluon.Block"><em>gluon.Block</em></a><em>, </em><em>default 'tanh'</em>) – Type of activation function used for the candidate memory <cite>c'_t</cite>. |
| If argument type is string, it’s equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks such as nn.LeakyReLU can be used.</p></li> |
| <li><p><strong>prefix</strong> (str, default <code class="docutils literal notranslate"><span class="pre">'conv_lstm_'</span></code>) – Prefix for name of layers (and name of weight if params is None).</p></li> |
| <li><p><strong>params</strong> (<em>RNNParams</em><em>, </em><em>default None</em>) – Container for weight sharing between cells. Created if None.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
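The kernel, pad, and dilate parameters above interact through standard convolution arithmetic. The following plain-Python sketch (the helper name is illustrative, not part of the MXNet API) shows the sequence length produced by the input convolution under layout 'NCW', and how choosing the pad from the dilated kernel size preserves the length:

```python
def conv_out_len(w, kernel, pad=0, dilate=1, stride=1):
    """Standard 1-D convolution output-length formula."""
    effective_kernel = dilate * (kernel - 1) + 1
    return (w + 2 * pad - effective_kernel) // stride + 1

# With kernel=3 and no padding, the sequence shrinks by 2 per conv:
print(conv_out_len(10, kernel=3))                    # 8
# Choosing i2h_pad = dilate * (kernel - 1) // 2 preserves the length
# for odd kernels ("same" padding):
print(conv_out_len(10, kernel=3, pad=1))             # 10
print(conv_out_len(10, kernel=5, pad=4, dilate=2))   # 10
```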
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv2DLSTMCell"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.rnn.</code><code class="sig-name descname">Conv2DLSTMCell</code><span class="sig-paren">(</span><em class="sig-param">input_shape</em>, <em class="sig-param">hidden_channels</em>, <em class="sig-param">i2h_kernel</em>, <em class="sig-param">h2h_kernel</em>, <em class="sig-param">i2h_pad=(0</em>, <em class="sig-param">0)</em>, <em class="sig-param">i2h_dilate=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">h2h_dilate=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">i2h_weight_initializer=None</em>, <em class="sig-param">h2h_weight_initializer=None</em>, <em class="sig-param">i2h_bias_initializer='zeros'</em>, <em class="sig-param">h2h_bias_initializer='zeros'</em>, <em class="sig-param">conv_layout='NCHW'</em>, <em class="sig-param">activation='tanh'</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv2DLSTMCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv2DLSTMCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvLSTMCell</span></code></p> |
| <p>2D Convolutional LSTM network cell.</p> |
| <p><a class="reference external" href="https://arxiv.org/abs/1506.04214">“Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting”</a>, Shi et al., NIPS 2015.</p> |
| <div class="math notranslate nohighlight"> |
| \[\begin{split}\begin{array}{ll} |
| i_t = \sigma(W_i \ast x_t + R_i \ast h_{t-1} + b_i) \\ |
| f_t = \sigma(W_f \ast x_t + R_f \ast h_{t-1} + b_f) \\ |
| o_t = \sigma(W_o \ast x_t + R_o \ast h_{t-1} + b_o) \\ |
| c^\prime_t = tanh(W_c \ast x_t + R_c \ast h_{t-1} + b_c) \\ |
| c_t = f_t \circ c_{t-1} + i_t \circ c^\prime_t \\ |
| h_t = o_t \circ tanh(c_t) \\ |
| \end{array}\end{split}\]</div> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding the |
| batch size and sequence length dimensions. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCHW’ the shape should be (C, H, W).</p></li> |
| <li><p><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</p></li> |
| <li><p><strong>i2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Input convolution kernel sizes.</p></li> |
| <li><p><strong>h2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</p></li> |
| <li><p><strong>i2h_pad</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>0</em><em>, </em><em>0</em><em>)</em>) – Pad for input convolution.</p></li> |
| <li><p><strong>i2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>, </em><em>1</em><em>)</em>) – Input convolution dilate.</p></li> |
| <li><p><strong>h2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>, </em><em>1</em><em>)</em>) – Recurrent convolution dilate.</p></li> |
| <li><p><strong>i2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the input weights matrix, used for the input convolutions.</p></li> |
| <li><p><strong>h2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</p></li> |
| <li><p><strong>i2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the input convolution bias vectors.</p></li> |
| <li><p><strong>h2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the recurrent convolution bias vectors.</p></li> |
| <li><p><strong>conv_layout</strong> (<em>str</em><em>, </em><em>default 'NCHW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCHW’ and ‘NHWC’.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em><em> or </em><a class="reference internal" href="../block.html#mxnet.gluon.Block" title="mxnet.gluon.Block"><em>gluon.Block</em></a><em>, </em><em>default 'tanh'</em>) – Type of activation function used in c^prime_t. |
| If the argument is a string, it is equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks, such as nn.LeakyReLU, can be used.</p></li> |
| <li><p><strong>prefix</strong> (str, default <code class="docutils literal notranslate"><span class="pre">'conv_lstm_'</span></code>) – Prefix for the names of layers (and the names of weights if params is None).</p></li> |
| <li><p><strong>params</strong> (<em>RNNParams</em><em>, </em><em>default None</em>) – Container for weight sharing between cells. Created if None.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
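The “only odd-numbered sizes are supported” restriction on h2h_kernel follows from the recurrent convolution needing “same” padding, so that the hidden state keeps its spatial shape across time steps: the total padding dilate * (kernel - 1) can be split symmetrically only when the kernel is odd. A simplified plain-Python sketch of that constraint (an assumed rationale for illustration, not MXNet source code):

```python
def same_pad(kernel, dilate=1):
    """Symmetric per-side padding that keeps the spatial size unchanged
    at stride 1. Total padding needed is dilate * (kernel - 1); splitting
    it evenly on both sides requires an odd kernel size."""
    total = dilate * (kernel - 1)
    if total % 2 != 0:
        raise ValueError("even kernel sizes cannot be padded symmetrically")
    return total // 2

print(same_pad(3))            # 1
print(same_pad(5, dilate=2))  # 4
```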
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv3DLSTMCell"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.rnn.</code><code class="sig-name descname">Conv3DLSTMCell</code><span class="sig-paren">(</span><em class="sig-param">input_shape</em>, <em class="sig-param">hidden_channels</em>, <em class="sig-param">i2h_kernel</em>, <em class="sig-param">h2h_kernel</em>, <em class="sig-param">i2h_pad=(0</em>, <em class="sig-param">0</em>, <em class="sig-param">0)</em>, <em class="sig-param">i2h_dilate=(1</em>, <em class="sig-param">1</em>, <em class="sig-param">1)</em>, <em class="sig-param">h2h_dilate=(1</em>, <em class="sig-param">1</em>, <em class="sig-param">1)</em>, <em class="sig-param">i2h_weight_initializer=None</em>, <em class="sig-param">h2h_weight_initializer=None</em>, <em class="sig-param">i2h_bias_initializer='zeros'</em>, <em class="sig-param">h2h_bias_initializer='zeros'</em>, <em class="sig-param">conv_layout='NCDHW'</em>, <em class="sig-param">activation='tanh'</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv3DLSTMCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv3DLSTMCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvLSTMCell</span></code></p> |
| <p>3D Convolutional LSTM network cell.</p> |
| <p><a class="reference external" href="https://arxiv.org/abs/1506.04214">“Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting”</a>, Shi et al., NIPS 2015.</p> |
| <div class="math notranslate nohighlight"> |
| \[\begin{split}\begin{array}{ll} |
| i_t = \sigma(W_i \ast x_t + R_i \ast h_{t-1} + b_i) \\ |
| f_t = \sigma(W_f \ast x_t + R_f \ast h_{t-1} + b_f) \\ |
| o_t = \sigma(W_o \ast x_t + R_o \ast h_{t-1} + b_o) \\ |
| c^\prime_t = tanh(W_c \ast x_t + R_c \ast h_{t-1} + b_c) \\ |
| c_t = f_t \circ c_{t-1} + i_t \circ c^\prime_t \\ |
| h_t = o_t \circ tanh(c_t) \\ |
| \end{array}\end{split}\]</div> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding the |
| batch size and sequence length dimensions. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCDHW’ the shape should be (C, D, H, W).</p></li> |
| <li><p><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</p></li> |
| <li><p><strong>i2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Input convolution kernel sizes.</p></li> |
| <li><p><strong>h2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</p></li> |
| <li><p><strong>i2h_pad</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>0</em><em>, </em><em>0</em><em>, </em><em>0</em><em>)</em>) – Pad for input convolution.</p></li> |
| <li><p><strong>i2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>, </em><em>1</em><em>, </em><em>1</em><em>)</em>) – Input convolution dilate.</p></li> |
| <li><p><strong>h2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>, </em><em>1</em><em>, </em><em>1</em><em>)</em>) – Recurrent convolution dilate.</p></li> |
| <li><p><strong>i2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the input weights matrix, used for the input convolutions.</p></li> |
| <li><p><strong>h2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</p></li> |
| <li><p><strong>i2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the input convolution bias vectors.</p></li> |
| <li><p><strong>h2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the recurrent convolution bias vectors.</p></li> |
| <li><p><strong>conv_layout</strong> (<em>str</em><em>, </em><em>default 'NCDHW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCDHW’ and ‘NDHWC’.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em><em> or </em><a class="reference internal" href="../block.html#mxnet.gluon.Block" title="mxnet.gluon.Block"><em>gluon.Block</em></a><em>, </em><em>default 'tanh'</em>) – Type of activation function used in c^prime_t. |
| If the argument is a string, it is equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks, such as nn.LeakyReLU, can be used.</p></li> |
| <li><p><strong>prefix</strong> (str, default <code class="docutils literal notranslate"><span class="pre">'conv_lstm_'</span></code>) – Prefix for the names of layers (and the names of weights if params is None).</p></li> |
| <li><p><strong>params</strong> (<em>RNNParams</em><em>, </em><em>default None</em>) – Container for weight sharing between cells. Created if None.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
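As in a fully-connected LSTM, the four gate pre-activations (i, f, o, c′) are produced together, which in convolutional form means the input-to-hidden and hidden-to-hidden weights carry 4 × hidden_channels output channels. The rough parameter count below is a sketch based on standard convolution weight shapes and that stacked-gate assumption, not a value read from MXNet source:

```python
def conv_lstm_3d_params(in_channels, hidden_channels, i2h_kernel, h2h_kernel):
    """Approximate weight/bias element count for a 3-D ConvLSTM cell,
    assuming the four gates are stacked along the output-channel axis."""
    gates = 4  # i, f, o, c'
    kd, kh, kw = i2h_kernel
    i2h_weights = gates * hidden_channels * in_channels * kd * kh * kw
    kd, kh, kw = h2h_kernel
    h2h_weights = gates * hidden_channels * hidden_channels * kd * kh * kw
    biases = 2 * gates * hidden_channels  # i2h and h2h bias vectors
    return i2h_weights + h2h_weights + biases

# e.g. RGB video input, 16 hidden channels, 3x3x3 kernels:
print(conv_lstm_3d_params(3, 16, (3, 3, 3), (3, 3, 3)))  # 32960
```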
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv1DGRUCell"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.rnn.</code><code class="sig-name descname">Conv1DGRUCell</code><span class="sig-paren">(</span><em class="sig-param">input_shape</em>, <em class="sig-param">hidden_channels</em>, <em class="sig-param">i2h_kernel</em>, <em class="sig-param">h2h_kernel</em>, <em class="sig-param">i2h_pad=(0</em>, <em class="sig-param">)</em>, <em class="sig-param">i2h_dilate=(1</em>, <em class="sig-param">)</em>, <em class="sig-param">h2h_dilate=(1</em>, <em class="sig-param">)</em>, <em class="sig-param">i2h_weight_initializer=None</em>, <em class="sig-param">h2h_weight_initializer=None</em>, <em class="sig-param">i2h_bias_initializer='zeros'</em>, <em class="sig-param">h2h_bias_initializer='zeros'</em>, <em class="sig-param">conv_layout='NCW'</em>, <em class="sig-param">activation='tanh'</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv1DGRUCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv1DGRUCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvGRUCell</span></code></p> |
| <p>1D Convolutional Gated Recurrent Unit (GRU) network cell.</p> |
| <div class="math notranslate nohighlight"> |
| \[\begin{split}\begin{array}{ll} |
| r_t = \sigma(W_r \ast x_t + R_r \ast h_{t-1} + b_r) \\ |
| z_t = \sigma(W_z \ast x_t + R_z \ast h_{t-1} + b_z) \\ |
| n_t = tanh(W_i \ast x_t + b_i + r_t \circ (R_n \ast h_{t-1} + b_n)) \\ |
| h^\prime_t = (1 - z_t) \circ n_t + z_t \circ h_{t-1} \\ |
| \end{array}\end{split}\]</div> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding the |
| batch size and sequence length dimensions. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCW’ the shape should be (C, W).</p></li> |
| <li><p><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</p></li> |
| <li><p><strong>i2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Input convolution kernel sizes.</p></li> |
| <li><p><strong>h2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</p></li> |
| <li><p><strong>i2h_pad</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>0</em><em>,</em><em>)</em>) – Pad for input convolution.</p></li> |
| <li><p><strong>i2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>,</em><em>)</em>) – Input convolution dilate.</p></li> |
| <li><p><strong>h2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>,</em><em>)</em>) – Recurrent convolution dilate.</p></li> |
| <li><p><strong>i2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the input weights matrix, used for the input convolutions.</p></li> |
| <li><p><strong>h2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</p></li> |
| <li><p><strong>i2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the input convolution bias vectors.</p></li> |
| <li><p><strong>h2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the recurrent convolution bias vectors.</p></li> |
| <li><p><strong>conv_layout</strong> (<em>str</em><em>, </em><em>default 'NCW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCW’ and ‘NWC’.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em><em> or </em><a class="reference internal" href="../block.html#mxnet.gluon.Block" title="mxnet.gluon.Block"><em>gluon.Block</em></a><em>, </em><em>default 'tanh'</em>) – Type of activation function used in n_t. |
| If the argument is a string, it is equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks, such as nn.LeakyReLU, can be used.</p></li> |
| <li><p><strong>prefix</strong> (str, default <code class="docutils literal notranslate"><span class="pre">'conv_gru_'</span></code>) – Prefix for the names of layers (and the names of weights if params is None).</p></li> |
| <li><p><strong>params</strong> (<em>RNNParams</em><em>, </em><em>default None</em>) – Container for weight sharing between cells. Created if None.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
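The gate equations above can be traced with a scalar toy version in which each convolution collapses to a multiplication, so every W and R is a single number (pure Python, illustrative only, not the MXNet implementation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h_prev, W_r, R_r, b_r, W_z, R_z, b_z, W_n, R_n, b_i, b_n):
    """Scalar analogue of the cell's equations: plain multiplication
    stands in for the convolutions."""
    r = sigmoid(W_r * x + R_r * h_prev + b_r)          # reset gate
    z = sigmoid(W_z * x + R_z * h_prev + b_z)          # update gate
    n = math.tanh(W_n * x + b_i + r * (R_n * h_prev + b_n))  # candidate
    return (1.0 - z) * n + z * h_prev                  # new hidden state

# One step from a zero state; the result lies between n and h_prev:
h = gru_step(1.0, 0.0, *([0.5] * 10))
print(round(h, 4))
```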
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv2DGRUCell"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.rnn.</code><code class="sig-name descname">Conv2DGRUCell</code><span class="sig-paren">(</span><em class="sig-param">input_shape</em>, <em class="sig-param">hidden_channels</em>, <em class="sig-param">i2h_kernel</em>, <em class="sig-param">h2h_kernel</em>, <em class="sig-param">i2h_pad=(0</em>, <em class="sig-param">0)</em>, <em class="sig-param">i2h_dilate=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">h2h_dilate=(1</em>, <em class="sig-param">1)</em>, <em class="sig-param">i2h_weight_initializer=None</em>, <em class="sig-param">h2h_weight_initializer=None</em>, <em class="sig-param">i2h_bias_initializer='zeros'</em>, <em class="sig-param">h2h_bias_initializer='zeros'</em>, <em class="sig-param">conv_layout='NCHW'</em>, <em class="sig-param">activation='tanh'</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv2DGRUCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv2DGRUCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvGRUCell</span></code></p> |
| <p>2D Convolutional Gated Recurrent Unit (GRU) network cell.</p> |
| <div class="math notranslate nohighlight"> |
| \[\begin{split}\begin{array}{ll} |
| r_t = \sigma(W_r \ast x_t + R_r \ast h_{t-1} + b_r) \\ |
| z_t = \sigma(W_z \ast x_t + R_z \ast h_{t-1} + b_z) \\ |
| n_t = tanh(W_i \ast x_t + b_i + r_t \circ (R_n \ast h_{t-1} + b_n)) \\ |
| h^\prime_t = (1 - z_t) \circ n_t + z_t \circ h_{t-1} \\ |
| \end{array}\end{split}\]</div> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding the |
| batch size and sequence length dimensions. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCHW’ the shape should be (C, H, W).</p></li> |
| <li><p><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</p></li> |
| <li><p><strong>i2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Input convolution kernel sizes.</p></li> |
| <li><p><strong>h2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</p></li> |
| <li><p><strong>i2h_pad</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>0</em><em>, </em><em>0</em><em>)</em>) – Pad for input convolution.</p></li> |
| <li><p><strong>i2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>, </em><em>1</em><em>)</em>) – Input convolution dilate.</p></li> |
| <li><p><strong>h2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>, </em><em>1</em><em>)</em>) – Recurrent convolution dilate.</p></li> |
| <li><p><strong>i2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the input weights matrix, used for the input convolutions.</p></li> |
| <li><p><strong>h2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</p></li> |
| <li><p><strong>i2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the input convolution bias vectors.</p></li> |
| <li><p><strong>h2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the recurrent convolution bias vectors.</p></li> |
| <li><p><strong>conv_layout</strong> (<em>str</em><em>, </em><em>default 'NCHW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCHW’ and ‘NHWC’.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em><em> or </em><a class="reference internal" href="../block.html#mxnet.gluon.Block" title="mxnet.gluon.Block"><em>gluon.Block</em></a><em>, </em><em>default 'tanh'</em>) – Type of activation function used in n_t. |
| If the argument is a string, it is equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks, such as nn.LeakyReLU, can be used.</p></li> |
| <li><p><strong>prefix</strong> (str, default <code class="docutils literal notranslate"><span class="pre">'conv_gru_'</span></code>) – Prefix for the names of layers (and the names of weights if params is None).</p></li> |
| <li><p><strong>params</strong> (<em>RNNParams</em><em>, </em><em>default None</em>) – Container for weight sharing between cells. Created if None.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv3DGRUCell"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.rnn.</code><code class="sig-name descname">Conv3DGRUCell</code><span class="sig-paren">(</span><em class="sig-param">input_shape</em>, <em class="sig-param">hidden_channels</em>, <em class="sig-param">i2h_kernel</em>, <em class="sig-param">h2h_kernel</em>, <em class="sig-param">i2h_pad=(0</em>, <em class="sig-param">0</em>, <em class="sig-param">0)</em>, <em class="sig-param">i2h_dilate=(1</em>, <em class="sig-param">1</em>, <em class="sig-param">1)</em>, <em class="sig-param">h2h_dilate=(1</em>, <em class="sig-param">1</em>, <em class="sig-param">1)</em>, <em class="sig-param">i2h_weight_initializer=None</em>, <em class="sig-param">h2h_weight_initializer=None</em>, <em class="sig-param">i2h_bias_initializer='zeros'</em>, <em class="sig-param">h2h_bias_initializer='zeros'</em>, <em class="sig-param">conv_layout='NCDHW'</em>, <em class="sig-param">activation='tanh'</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv3DGRUCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv3DGRUCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.rnn.conv_rnn_cell._ConvGRUCell</span></code></p> |
| <p>3D Convolutional Gated Recurrent Unit (GRU) network cell.</p> |
| <div class="math notranslate nohighlight"> |
| \[\begin{split}\begin{array}{ll} |
| r_t = \sigma(W_r \ast x_t + R_r \ast h_{t-1} + b_r) \\ |
| z_t = \sigma(W_z \ast x_t + R_z \ast h_{t-1} + b_z) \\ |
| n_t = tanh(W_i \ast x_t + b_i + r_t \circ (R_n \ast h_{t-1} + b_n)) \\ |
| h^\prime_t = (1 - z_t) \circ n_t + z_t \circ h_{t-1} \\ |
| \end{array}\end{split}\]</div> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding the |
| batch size and sequence length dimensions. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCDHW’ the shape should be (C, D, H, W).</p></li> |
| <li><p><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</p></li> |
| <li><p><strong>i2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Input convolution kernel sizes.</p></li> |
| <li><p><strong>h2h_kernel</strong> (<em>int</em><em> or </em><em>tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</p></li> |
| <li><p><strong>i2h_pad</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>0</em><em>, </em><em>0</em><em>, </em><em>0</em><em>)</em>) – Pad for input convolution.</p></li> |
| <li><p><strong>i2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>, </em><em>1</em><em>, </em><em>1</em><em>)</em>) – Input convolution dilate.</p></li> |
| <li><p><strong>h2h_dilate</strong> (<em>int</em><em> or </em><em>tuple of int</em><em>, </em><em>default</em><em> (</em><em>1</em><em>, </em><em>1</em><em>, </em><em>1</em><em>)</em>) – Recurrent convolution dilate.</p></li> |
| <li><p><strong>i2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the input weights matrix, used for the input convolutions.</p></li> |
| <li><p><strong>h2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</p></li> |
| <li><p><strong>i2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the input convolution bias vectors.</p></li> |
| <li><p><strong>h2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default zeros</em>) – Initializer for the recurrent convolution bias vectors.</p></li> |
| <li><p><strong>conv_layout</strong> (<em>str</em><em>, </em><em>default 'NCDHW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCDHW’ and ‘NDHWC’.</p></li> |
| <li><p><strong>activation</strong> (<em>str</em><em> or </em><a class="reference internal" href="../block.html#mxnet.gluon.Block" title="mxnet.gluon.Block"><em>gluon.Block</em></a><em>, </em><em>default 'tanh'</em>) – Type of activation function used in n_t. |
| If the argument is a string, it is equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal notranslate"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks, such as nn.LeakyReLU, can be used.</p></li> |
| <li><p><strong>prefix</strong> (str, default <code class="docutils literal notranslate"><span class="pre">'conv_gru_'</span></code>) – Prefix for the names of layers (and the names of weights if params is None).</p></li> |
| <li><p><strong>params</strong> (<em>RNNParams</em><em>, </em><em>default None</em>) – Container for weight sharing between cells. Created if None.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.VariationalDropoutCell"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.rnn.</code><code class="sig-name descname">VariationalDropoutCell</code><span class="sig-paren">(</span><em class="sig-param">base_cell</em>, <em class="sig-param">drop_inputs=0.0</em>, <em class="sig-param">drop_states=0.0</em>, <em class="sig-param">drop_outputs=0.0</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/rnn_cell.html#VariationalDropoutCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.VariationalDropoutCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.rnn.rnn_cell.ModifierCell</span></code></p> |
| <p>Applies Variational Dropout on base cell. |
| <a class="reference external" href="https://arxiv.org/pdf/1512.05287.pdf">https://arxiv.org/pdf/1512.05287.pdf</a></p> |
| <p>Variational dropout uses the same dropout mask across time-steps. It can be applied to RNN |
| inputs, outputs, and states. The masks for them are not shared.</p> |
| <p>The dropout mask is initialized when stepping forward for the first time and remains
| the same until .reset() is called. Thus, when using the cell and stepping manually without calling
| .unroll(), .reset() should be called after each sequence.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>base_cell</strong> (<a class="reference internal" href="../rnn/index.html#mxnet.gluon.rnn.RecurrentCell" title="mxnet.gluon.rnn.RecurrentCell"><em>RecurrentCell</em></a>) – The cell on which to perform variational dropout.</p></li> |
| <li><p><strong>drop_inputs</strong> (<em>float</em><em>, </em><em>default 0.</em>) – The dropout rate for inputs. Won’t apply dropout if it equals 0.</p></li> |
| <li><p><strong>drop_states</strong> (<em>float</em><em>, </em><em>default 0.</em>) – The dropout rate for state inputs on the first state channel. |
| Won’t apply dropout if it equals 0.</p></li> |
| <li><p><strong>drop_outputs</strong> (<em>float</em><em>, </em><em>default 0.</em>) – The dropout rate for outputs. Won’t apply dropout if it equals 0.</p></li> |
| </ul> |
| </dd> |
| </dl> |
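| <p>The shared-mask behaviour described above can be sketched in plain NumPy (a conceptual illustration, not the MXNet implementation): one mask is drawn at the first step and reused until reset() is called.</p> |

```python
import numpy as np

class SharedMaskDropout:
    # Conceptual sketch, not the MXNet implementation: variational dropout
    # draws one mask at the first step and reuses it until reset().
    def __init__(self, rate, seed=0):
        self.rate = rate
        self.rng = np.random.default_rng(seed)
        self.mask = None

    def reset(self):
        self.mask = None            # a fresh mask is drawn on the next step

    def __call__(self, x):
        if self.rate == 0.0:
            return x
        if self.mask is None:       # first step of the sequence
            keep = 1.0 - self.rate
            self.mask = self.rng.binomial(1, keep, size=x.shape) / keep
        return x * self.mask

drop = SharedMaskDropout(rate=0.5)
x = np.ones((2, 4))
step1 = drop(x)
step2 = drop(x)                      # identical mask across time steps
```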
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.rnn.VariationalDropoutCell.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">inputs</em>, <em class="sig-param">states</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/rnn_cell.html#VariationalDropoutCell.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.VariationalDropoutCell.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.rnn.VariationalDropoutCell.reset"> |
| <code class="sig-name descname">reset</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/rnn_cell.html#VariationalDropoutCell.reset"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.VariationalDropoutCell.reset" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Reset before re-using the cell for another graph.</p> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.rnn.VariationalDropoutCell.unroll"> |
| <code class="sig-name descname">unroll</code><span class="sig-paren">(</span><em class="sig-param">length</em>, <em class="sig-param">inputs</em>, <em class="sig-param">begin_state=None</em>, <em class="sig-param">layout='NTC'</em>, <em class="sig-param">merge_outputs=None</em>, <em class="sig-param">valid_length=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/rnn_cell.html#VariationalDropoutCell.unroll"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.VariationalDropoutCell.unroll" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Unrolls an RNN cell across time steps.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>length</strong> (<em>int</em>) – Number of steps to unroll.</p></li> |
| <li><p><strong>inputs</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em>, </em><em>list of Symbol</em><em>, or </em><em>None</em>) – <p>If <cite>inputs</cite> is a single Symbol (usually the output |
| of Embedding symbol), it should have shape |
| (batch_size, length, …) if <cite>layout</cite> is ‘NTC’, |
| or (length, batch_size, …) if <cite>layout</cite> is ‘TNC’.</p> |
| <p>If <cite>inputs</cite> is a list of symbols (usually output of |
| previous unroll), they should all have shape |
| (batch_size, …).</p> |
| </p></li> |
| <li><p><strong>begin_state</strong> (<em>nested list of Symbol</em><em>, </em><em>optional</em>) – Input states created by <cite>begin_state()</cite> |
| or output state of another cell. |
| Created from <cite>begin_state()</cite> if <cite>None</cite>.</p></li> |
| <li><p><strong>layout</strong> (<em>str</em><em>, </em><em>optional</em>) – <cite>layout</cite> of input symbol. Only used if inputs |
| is a single Symbol.</p></li> |
| <li><p><strong>merge_outputs</strong> (<em>bool</em><em>, </em><em>optional</em>) – If <cite>False</cite>, returns outputs as a list of Symbols. |
| If <cite>True</cite>, concatenates output across time steps |
| and returns a single symbol with shape |
| (batch_size, length, …) if layout is ‘NTC’, |
| or (length, batch_size, …) if layout is ‘TNC’. |
| If <cite>None</cite>, outputs whatever is faster.</p></li> |
| <li><p><strong>valid_length</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em>, </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a><em> or </em><em>None</em>) – <cite>valid_length</cite> specifies the length of the sequences in the batch without padding. |
| This option is especially useful for building sequence-to-sequence models where |
| the input and output sequences would potentially be padded. |
| If <cite>valid_length</cite> is None, all sequences are assumed to have the same length. |
| If <cite>valid_length</cite> is a Symbol or NDArray, it should have shape (batch_size,). |
| The ith element will be the length of the ith sequence in the batch. |
| The last valid state will be returned and the padded outputs will be masked with 0.
| Note that <cite>valid_length</cite> must be smaller than or equal to <cite>length</cite>.</p></li> |
| </ul> |
| </dd> |
| <dt class="field-even">Returns</dt> |
| <dd class="field-even"><p><ul class="simple"> |
| <li><p><strong>outputs</strong> (<em>list of Symbol or Symbol</em>) – Symbol (if <cite>merge_outputs</cite> is True) or list of Symbols |
| (if <cite>merge_outputs</cite> is False) corresponding to the output from |
| the RNN from this unrolling.</p></li> |
| <li><p><strong>states</strong> (<em>list of Symbol</em>) – The new state of this RNN after this unrolling. |
| The type of this symbol is the same as the output of <cite>begin_state()</cite>.</p></li> |
| </ul> |
| </p> |
| </dd> |
| </dl> |
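| <p>To make the layout handling concrete, the sketch below shows how a batched sequence tensor is split into per-step inputs for 'NTC' versus 'TNC' layouts (a NumPy illustration of the slicing only, not the MXNet implementation; unroll_inputs is a hypothetical name).</p> |

```python
import numpy as np

def unroll_inputs(inputs, length, layout="NTC"):
    # Conceptual sketch of how `unroll` splits a sequence tensor into
    # per-step inputs; not the MXNet implementation.
    time_axis = layout.find("T")     # 1 for 'NTC', 0 for 'TNC'
    assert inputs.shape[time_axis] == length
    return [np.take(inputs, t, axis=time_axis) for t in range(length)]

x = np.arange(2 * 3 * 4).reshape(2, 3, 4)   # (batch=2, length=3, channels=4)
steps = unroll_inputs(x, length=3, layout="NTC")
```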
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.LSTMPCell"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.rnn.</code><code class="sig-name descname">LSTMPCell</code><span class="sig-paren">(</span><em class="sig-param">hidden_size</em>, <em class="sig-param">projection_size</em>, <em class="sig-param">i2h_weight_initializer=None</em>, <em class="sig-param">h2h_weight_initializer=None</em>, <em class="sig-param">h2r_weight_initializer=None</em>, <em class="sig-param">i2h_bias_initializer='zeros'</em>, <em class="sig-param">h2h_bias_initializer='zeros'</em>, <em class="sig-param">input_size=0</em>, <em class="sig-param">prefix=None</em>, <em class="sig-param">params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/rnn_cell.html#LSTMPCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.LSTMPCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.rnn.rnn_cell.HybridRecurrentCell</span></code></p> |
| <p>Long Short-Term Memory Projected (LSTMP) network cell.
| (<a class="reference external" href="https://arxiv.org/abs/1402.1128">https://arxiv.org/abs/1402.1128</a>)</p> |
| <p>Each call computes the following function:</p> |
| <div class="math notranslate nohighlight"> |
| \[\begin{split}\begin{array}{ll}
| i_t = \mathrm{sigmoid}(W_{ii} x_t + b_{ii} + W_{ri} r_{(t-1)} + b_{ri}) \\
| f_t = \mathrm{sigmoid}(W_{if} x_t + b_{if} + W_{rf} r_{(t-1)} + b_{rf}) \\
| g_t = \tanh(W_{ig} x_t + b_{ig} + W_{rg} r_{(t-1)} + b_{rg}) \\
| o_t = \mathrm{sigmoid}(W_{io} x_t + b_{io} + W_{ro} r_{(t-1)} + b_{ro}) \\
| c_t = f_t * c_{(t-1)} + i_t * g_t \\
| h_t = o_t * \tanh(c_t) \\
| r_t = W_{hr} h_t
| \end{array}\end{split}\]</div> |
| <p>where <span class="math notranslate nohighlight">\(r_t\)</span> is the projected recurrent activation at time <cite>t</cite>, |
| <span class="math notranslate nohighlight">\(h_t\)</span> is the hidden state at time <cite>t</cite>, <span class="math notranslate nohighlight">\(c_t\)</span> is the |
| cell state at time <cite>t</cite>, <span class="math notranslate nohighlight">\(x_t\)</span> is the input at time <cite>t</cite>, and <span class="math notranslate nohighlight">\(i_t\)</span>, |
| <span class="math notranslate nohighlight">\(f_t\)</span>, <span class="math notranslate nohighlight">\(g_t\)</span>, <span class="math notranslate nohighlight">\(o_t\)</span> are the input, forget, cell, and |
| output gates, respectively.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>hidden_size</strong> (<em>int</em>) – Number of units in cell state symbol.</p></li> |
| <li><p><strong>projection_size</strong> (<em>int</em>) – Number of units in output symbol.</p></li> |
| <li><p><strong>i2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the input weights matrix, used for the linear |
| transformation of the inputs.</p></li> |
| <li><p><strong>h2h_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the recurrent weights matrix, used for the linear |
| transformation of the hidden state.</p></li> |
| <li><p><strong>h2r_weight_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the projection weights matrix, used for the linear |
| transformation of the recurrent state.</p></li> |
| <li><p><strong>i2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a><em>, </em><em>default 'zeros'</em>) – Initializer for the bias vector. By default, all biases are initialized
| to zero.</p></li> |
| <li><p><strong>h2h_bias_initializer</strong> (<em>str</em><em> or </em><a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer for the bias vector.</p></li> |
| <li><p><strong>prefix</strong> (str, default <code class="docutils literal notranslate"><span class="pre">'lstmp_'</span></code>) – Prefix for name of layers
| (and name of weight if params is None).</p></li> |
| <li><p><strong>params</strong> (<a class="reference internal" href="../parameter.html#mxnet.gluon.Parameter" title="mxnet.gluon.Parameter"><em>Parameter</em></a><em> or </em><em>None</em>) – Container for weight sharing between cells. |
| Created if <cite>None</cite>.</p></li> |
| <li><p><strong>Inputs</strong> – <ul> |
| <li><p><strong>data</strong>: input tensor with shape <cite>(batch_size, input_size)</cite>.</p></li> |
| <li><p><strong>states</strong>: a list of two initial recurrent state tensors, with shape |
| <cite>(batch_size, projection_size)</cite> and <cite>(batch_size, hidden_size)</cite> respectively.</p></li> |
| </ul> |
| </p></li> |
| <li><p><strong>Outputs</strong> – <ul> |
| <li><p><strong>out</strong>: output tensor with shape <cite>(batch_size, projection_size)</cite>.</p></li> |
| <li><p><strong>next_states</strong>: a list of two output recurrent state tensors. Each has |
| the same shape as <cite>states</cite>.</p></li> |
| </ul> |
| </p></li> |
| </ul> |
| </dd> |
| </dl> |
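| <p>The equations above can be checked with a small NumPy sketch of a single LSTMP step (conceptual only; the gate order, weight packing, and the helper name lstmp_step are assumptions of this sketch, not MXNet internals).</p> |

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstmp_step(x, r_prev, c_prev, W, R, P, b):
    # One LSTMP step following the equations above (conceptual sketch).
    # W: (4*hidden, input), R: (4*hidden, projection), P: (projection, hidden),
    # b: (4*hidden,); gate order i, f, g, o is an assumption of this sketch.
    h = b.size // 4
    z = W @ x + R @ r_prev + b
    i = sigmoid(z[:h])
    f = sigmoid(z[h:2 * h])
    g = np.tanh(z[2 * h:3 * h])
    o = sigmoid(z[3 * h:])
    c = f * c_prev + i * g          # new cell state
    h_t = o * np.tanh(c)            # hidden state
    r = P @ h_t                     # projected recurrent state
    return r, c

rng = np.random.default_rng(0)
n_in, n_hid, n_proj = 5, 8, 3
r, c = lstmp_step(rng.standard_normal(n_in),
                  np.zeros(n_proj), np.zeros(n_hid),
                  rng.standard_normal((4 * n_hid, n_in)),
                  rng.standard_normal((4 * n_hid, n_proj)),
                  rng.standard_normal((n_proj, n_hid)),
                  np.zeros(4 * n_hid))
```

Note how the two recurrent state tensors have different sizes, matching the <cite>(batch_size, projection_size)</cite> and <cite>(batch_size, hidden_size)</cite> shapes documented above.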
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.rnn.LSTMPCell.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">inputs</em>, <em class="sig-param">states</em>, <em class="sig-param">i2h_weight</em>, <em class="sig-param">h2h_weight</em>, <em class="sig-param">h2r_weight</em>, <em class="sig-param">i2h_bias</em>, <em class="sig-param">h2h_bias</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/rnn_cell.html#LSTMPCell.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.LSTMPCell.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.rnn.LSTMPCell.state_info"> |
| <code class="sig-name descname">state_info</code><span class="sig-paren">(</span><em class="sig-param">batch_size=0</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/rnn_cell.html#LSTMPCell.state_info"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.LSTMPCell.state_info" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Shape and layout information of states.</p> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <span class="target" id="module-mxnet.gluon.contrib.data.sampler"></span><p>Dataset sampler.</p> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.data.sampler.IntervalSampler"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.data.sampler.</code><code class="sig-name descname">IntervalSampler</code><span class="sig-paren">(</span><em class="sig-param">length</em>, <em class="sig-param">interval</em>, <em class="sig-param">rollover=True</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/data/sampler.html#IntervalSampler"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.data.sampler.IntervalSampler" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.data.sampler.Sampler</span></code></p> |
| <p>Samples elements from [0, length) at fixed intervals.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>length</strong> (<em>int</em>) – Length of the sequence.</p></li> |
| <li><p><strong>interval</strong> (<em>int</em>) – The number of items to skip between two samples.</p></li> |
| <li><p><strong>rollover</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to start again from the first skipped item after reaching the end.
| If True, this sampler starts again from the first skipped item until all items
| are visited.
| Otherwise, iteration stops when the end is reached and skipped items are ignored.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p class="rubric">Examples</p> |
| <div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">sampler</span> <span class="o">=</span> <span class="n">contrib</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">IntervalSampler</span><span class="p">(</span><span class="mi">13</span><span class="p">,</span> <span class="n">interval</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span> |
| <span class="gp">>>> </span><span class="nb">list</span><span class="p">(</span><span class="n">sampler</span><span class="p">)</span> |
| <span class="go">[0, 3, 6, 9, 12, 1, 4, 7, 10, 2, 5, 8, 11]</span> |
| <span class="gp">>>> </span><span class="n">sampler</span> <span class="o">=</span> <span class="n">contrib</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">IntervalSampler</span><span class="p">(</span><span class="mi">13</span><span class="p">,</span> <span class="n">interval</span><span class="o">=</span><span class="mi">3</span><span class="p">,</span> <span class="n">rollover</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span> |
| <span class="gp">>>> </span><span class="nb">list</span><span class="p">(</span><span class="n">sampler</span><span class="p">)</span> |
| <span class="go">[0, 3, 6, 9, 12]</span> |
| </pre></div> |
| </div> |
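| <p>The sampling order shown in the examples can be reproduced with a few lines of plain Python (a conceptual re-implementation of the ordering, not the library code):</p> |

```python
def interval_indices(length, interval, rollover=True):
    # Visit every `interval`-th index; with rollover, continue from each
    # skipped starting offset until all indices are covered.
    starts = range(interval) if rollover else range(1)
    return [i for s in starts for i in range(s, length, interval)]

order = interval_indices(13, 3)
# matches the documented output: [0, 3, 6, 9, 12, 1, 4, 7, 10, 2, 5, 8, 11]
```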
| </dd></dl> |
| |
| <span class="target" id="module-mxnet.gluon.contrib.data.text"></span><p>Text datasets.</p> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.data.text.WikiText2"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.data.text.</code><code class="sig-name descname">WikiText2</code><span class="sig-paren">(</span><em class="sig-param">root='$MXNET_HOME/datasets/wikitext-2'</em>, <em class="sig-param">segment='train'</em>, <em class="sig-param">vocab=None</em>, <em class="sig-param">seq_len=35</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/data/text.html#WikiText2"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.data.text.WikiText2" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.data.text._WikiText</span></code></p> |
| <p>WikiText-2 word-level dataset for language modeling, from Salesforce Research.</p> |
| <p>From |
| <a class="reference external" href="https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/">https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/</a></p> |
| <p>License: Creative Commons Attribution-ShareAlike</p> |
| <p>Each sample is a vector of length equal to the specified sequence length. |
| At the end of each sentence, an end-of-sentence token ‘&lt;eos&gt;’ is added.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>root</strong> (<em>str</em><em>, </em><em>default $MXNET_HOME/datasets/wikitext-2</em>) – Path to temp folder for storing data.</p></li> |
| <li><p><strong>segment</strong> (<em>str</em><em>, </em><em>default 'train'</em>) – Dataset segment. Options are ‘train’, ‘validation’, ‘test’.</p></li> |
| <li><p><strong>vocab</strong> (<code class="xref py py-class docutils literal notranslate"><span class="pre">Vocabulary</span></code>, default None) – The vocabulary to use for indexing the text dataset. |
| If None, a default vocabulary is created.</p></li> |
| <li><p><strong>seq_len</strong> (<em>int</em><em>, </em><em>default 35</em>) – The sequence length of each sample, regardless of the sentence boundary.</p></li> |
| </ul> |
| </dd> |
| </dl> |
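| <p>How fixed-length samples relate to sentence boundaries can be sketched in plain Python (conceptual only; make_samples is a hypothetical helper, and the string "EOS" stands in for the dataset's end-of-sentence token):</p> |

```python
def make_samples(sentences, seq_len, eos="EOS"):
    # Conceptual sketch: append an end-of-sentence marker to each sentence,
    # flatten into one token stream, and cut fixed-length samples from it,
    # regardless of sentence boundaries.
    stream = [tok for sent in sentences for tok in sent + [eos]]
    n = len(stream) // seq_len
    return [stream[i * seq_len:(i + 1) * seq_len] for i in range(n)]

samples = make_samples([["a", "b"], ["c", "d", "e"]], seq_len=3)
```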
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.data.text.WikiText103"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.data.text.</code><code class="sig-name descname">WikiText103</code><span class="sig-paren">(</span><em class="sig-param">root='$MXNET_HOME/datasets/wikitext-103'</em>, <em class="sig-param">segment='train'</em>, <em class="sig-param">vocab=None</em>, <em class="sig-param">seq_len=35</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/data/text.html#WikiText103"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.data.text.WikiText103" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.data.text._WikiText</span></code></p> |
| <p>WikiText-103 word-level dataset for language modeling, from Salesforce Research.</p> |
| <p>From |
| <a class="reference external" href="https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/">https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/</a></p> |
| <p>License: Creative Commons Attribution-ShareAlike</p> |
| <p>Each sample is a vector of length equal to the specified sequence length. |
| At the end of each sentence, an end-of-sentence token ‘&lt;eos&gt;’ is added.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>root</strong> (<em>str</em><em>, </em><em>default $MXNET_HOME/datasets/wikitext-103</em>) – Path to temp folder for storing data.</p></li> |
| <li><p><strong>segment</strong> (<em>str</em><em>, </em><em>default 'train'</em>) – Dataset segment. Options are ‘train’, ‘validation’, ‘test’.</p></li> |
| <li><p><strong>vocab</strong> (<code class="xref py py-class docutils literal notranslate"><span class="pre">Vocabulary</span></code>, default None) – The vocabulary to use for indexing the text dataset. |
| If None, a default vocabulary is created.</p></li> |
| <li><p><strong>seq_len</strong> (<em>int</em><em>, </em><em>default 35</em>) – The sequence length of each sample, regardless of the sentence boundary.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <span class="target" id="module-mxnet.gluon.contrib.estimator"></span><p>Gluon Estimator module.</p> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.estimator.BatchProcessor"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.estimator.</code><code class="sig-name descname">BatchProcessor</code><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/estimator/batch_processor.html#BatchProcessor"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.estimator.BatchProcessor" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">object</span></code></p> |
| <p>BatchProcessor class providing plug-and-play <cite>fit_batch()</cite> and <cite>evaluate_batch()</cite> hooks.</p> |
| <p>During training or validation, data are divided into minibatches for processing. This
| class provides hooks for training or validating on a minibatch of data. Users
| may provide customized fit_batch() and evaluate_batch() methods by inheriting from
| this class and overriding them.</p> |
| <p><a class="reference internal" href="#mxnet.gluon.contrib.estimator.BatchProcessor" title="mxnet.gluon.contrib.estimator.BatchProcessor"><code class="xref py py-class docutils literal notranslate"><span class="pre">BatchProcessor</span></code></a> can be used to replace fit_batch() and evaluate_batch()
| in the base estimator class.</p> |
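| <p>The delegation pattern described above can be sketched without MXNet: an estimator forwards per-batch work to a processor object whose methods a user can override. The class names ending in "Sketch" below are hypothetical stand-ins, not the real API.</p> |

```python
class BatchProcessorSketch:
    # Conceptual sketch of the hook pattern; not the real MXNet class.
    def fit_batch(self, estimator, train_batch, batch_axis=0):
        data, label = train_batch
        pred = estimator.net(data)
        loss = estimator.loss(pred, label)
        return data, label, pred, loss

class EstimatorSketch:
    # Minimal stand-in estimator that delegates per-batch work.
    def __init__(self, net, loss, processor):
        self.net, self.loss, self.processor = net, loss, processor

    def fit_step(self, batch):
        return self.processor.fit_batch(self, batch)

est = EstimatorSketch(net=lambda x: x * 2,
                      loss=lambda p, y: abs(p - y),
                      processor=BatchProcessorSketch())
data, label, pred, loss = est.fit_step((3, 5))
```

Overriding fit_batch in a subclass of the processor changes the per-batch behaviour without touching the estimator itself.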
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.estimator.BatchProcessor.evaluate_batch"> |
| <code class="sig-name descname">evaluate_batch</code><span class="sig-paren">(</span><em class="sig-param">estimator</em>, <em class="sig-param">val_batch</em>, <em class="sig-param">batch_axis=0</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/estimator/batch_processor.html#BatchProcessor.evaluate_batch"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.estimator.BatchProcessor.evaluate_batch" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Evaluate the estimator model on a batch of validation data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>estimator</strong> (<a class="reference internal" href="#mxnet.gluon.contrib.estimator.Estimator" title="mxnet.gluon.contrib.estimator.Estimator"><em>Estimator</em></a>) – Reference to the estimator.</p></li> |
| <li><p><strong>val_batch</strong> (<em>tuple</em>) – Data and label of a batch from the validation data loader.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – Batch axis to split the validation data into devices.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.estimator.BatchProcessor.fit_batch"> |
| <code class="sig-name descname">fit_batch</code><span class="sig-paren">(</span><em class="sig-param">estimator</em>, <em class="sig-param">train_batch</em>, <em class="sig-param">batch_axis=0</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/estimator/batch_processor.html#BatchProcessor.fit_batch"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.estimator.BatchProcessor.fit_batch" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Trains the estimator model on a batch of training data.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>estimator</strong> (<a class="reference internal" href="#mxnet.gluon.contrib.estimator.Estimator" title="mxnet.gluon.contrib.estimator.Estimator"><em>Estimator</em></a>) – Reference to the estimator.</p></li> |
| <li><p><strong>train_batch</strong> (<em>tuple</em>) – Data and label of a batch from the training data loader.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – Batch axis to split the training data into devices.</p></li> |
| </ul> |
| </dd> |
| <dt class="field-even">Returns</dt> |
| <dd class="field-even"><p><ul class="simple"> |
| <li><p><strong>data</strong> (<em>List of NDArray</em>) – Sharded data from the batch. Data is sharded with |
| <cite>gluon.split_and_load</cite>.</p></li> |
| <li><p><strong>label</strong> (<em>List of NDArray</em>) – Sharded label from the batch. Labels are sharded with |
| <cite>gluon.split_and_load</cite>.</p></li> |
| <li><p><strong>pred</strong> (<em>List of NDArray</em>) – Prediction on each of the sharded inputs.</p></li> |
| <li><p><strong>loss</strong> (<em>List of NDArray</em>) – Loss on each of the sharded inputs.</p></li> |
| </ul> |
| </p> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.estimator.CheckpointHandler"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.estimator.</code><code class="sig-name descname">CheckpointHandler</code><span class="sig-paren">(</span><em class="sig-param">model_dir</em>, <em class="sig-param">model_prefix='model'</em>, <em class="sig-param">monitor=None</em>, <em class="sig-param">verbose=0</em>, <em class="sig-param">save_best=False</em>, <em class="sig-param">mode='auto'</em>, <em class="sig-param">epoch_period=1</em>, <em class="sig-param">batch_period=None</em>, <em class="sig-param">max_checkpoints=5</em>, <em class="sig-param">resume_from_checkpoint=False</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/estimator/event_handler.html#CheckpointHandler"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.estimator.CheckpointHandler" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.TrainBegin</span></code>, <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.BatchEnd</span></code>, <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.EpochEnd</span></code></p> |
| <p>Save the model after a user-defined period.</p> |
| <p><a class="reference internal" href="#mxnet.gluon.contrib.estimator.CheckpointHandler" title="mxnet.gluon.contrib.estimator.CheckpointHandler"><code class="xref py py-class docutils literal notranslate"><span class="pre">CheckpointHandler</span></code></a> saves the network architecture after the first batch if the model
| can be fully hybridized, and saves model parameters and trainer states at a user-defined period,
| every epoch by default.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>model_dir</strong> (<em>str</em>) – File directory to save all the model related files including model architecture, |
| model parameters, and trainer states.</p></li> |
| <li><p><strong>model_prefix</strong> (<em>str</em><em>, </em><em>default 'model'</em>) – Prefix to add for all checkpoint file names.</p></li> |
| <li><p><strong>monitor</strong> (<a class="reference internal" href="../../metric/index.html#mxnet.metric.EvalMetric" title="mxnet.metric.EvalMetric"><em>EvalMetric</em></a><em>, </em><em>default None</em>) – The metrics to monitor and determine if the model has improved.</p></li> |
| <li><p><strong>verbose</strong> (<em>int</em><em>, </em><em>default 0</em>) – Verbosity mode; 1 means the user is informed every time a checkpoint is saved.</p></li> |
| <li><p><strong>save_best</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, <cite>monitor</cite> must not be None, and <a class="reference internal" href="#mxnet.gluon.contrib.estimator.CheckpointHandler" title="mxnet.gluon.contrib.estimator.CheckpointHandler"><code class="xref py py-class docutils literal notranslate"><span class="pre">CheckpointHandler</span></code></a> will save the |
| model parameters and trainer states with the best monitored value.</p></li> |
| <li><p><strong>mode</strong> (<em>str</em><em>, </em><em>default 'auto'</em>) – One of {auto, min, max}; if <cite>save_best=True</cite>, the comparison to make |
| to determine whether the monitored value has improved. In 'auto' mode, |
| <a class="reference internal" href="#mxnet.gluon.contrib.estimator.CheckpointHandler" title="mxnet.gluon.contrib.estimator.CheckpointHandler"><code class="xref py py-class docutils literal notranslate"><span class="pre">CheckpointHandler</span></code></a> will try to use min or max based on |
| the monitored metric name.</p></li> |
| <li><p><strong>epoch_period</strong> (<em>int</em><em>, </em><em>default 1</em>) – Epoch intervals between saving the network. By default, checkpoints are |
| saved every epoch.</p></li> |
| <li><p><strong>batch_period</strong> (<em>int</em><em>, </em><em>default None</em>) – Batch intervals between saving the network. |
| By default, checkpoints are not saved based on the number of batches.</p></li> |
| <li><p><strong>max_checkpoints</strong> (<em>int</em><em>, </em><em>default 5</em>) – Maximum number of checkpoint files to keep in the model_dir, older checkpoints |
| will be removed. Best checkpoint file is not counted.</p></li> |
| <li><p><strong>resume_from_checkpoint</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to resume training from a checkpoint in model_dir. If True and checkpoints are |
| found, <a class="reference internal" href="#mxnet.gluon.contrib.estimator.CheckpointHandler" title="mxnet.gluon.contrib.estimator.CheckpointHandler"><code class="xref py py-class docutils literal notranslate"><span class="pre">CheckpointHandler</span></code></a> will load the network parameters and trainer states, |
| and train for the remaining epochs and batches.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
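<p>The <cite>max_checkpoints</cite> rotation described above can be illustrated with a small self-contained sketch (plain Python with hypothetical file names, not the actual <cite>CheckpointHandler</cite> implementation): keep only the newest periodic checkpoints, and never count a "best" checkpoint against the limit.</p>

```python
# Sketch of the max_checkpoints rotation described above (illustration
# only, not the actual CheckpointHandler code): keep the newest N
# periodic checkpoints; files marked "best" are not counted or removed.

def rotate_checkpoints(saved, new_file, max_checkpoints=5):
    """Append new_file, then drop the oldest periodic checkpoints
    beyond max_checkpoints. `saved` is ordered oldest-first."""
    saved = saved + [new_file]
    periodic = [f for f in saved if 'best' not in f]
    excess = len(periodic) - max_checkpoints
    if excess > 0:
        stale = set(periodic[:excess])
        saved = [f for f in saved if f not in stale]
    return saved

files = []
for epoch in range(7):
    files = rotate_checkpoints(files, 'model-epoch%d.params' % epoch)
print(files)  # only the five newest remain: epochs 2 through 6
```

<p>With seven epochs and the default limit of five, the two oldest periodic checkpoints are removed while a "best" file would always survive.</p>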
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.estimator.EarlyStoppingHandler"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.estimator.</code><code class="sig-name descname">EarlyStoppingHandler</code><span class="sig-paren">(</span><em class="sig-param">monitor</em>, <em class="sig-param">min_delta=0</em>, <em class="sig-param">patience=0</em>, <em class="sig-param">mode='auto'</em>, <em class="sig-param">baseline=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/estimator/event_handler.html#EarlyStoppingHandler"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.estimator.EarlyStoppingHandler" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.TrainBegin</span></code>, <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.EpochEnd</span></code>, <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.TrainEnd</span></code></p> |
| <p>Stop training early if the monitored value is not improving.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>monitor</strong> (<a class="reference internal" href="../../metric/index.html#mxnet.metric.EvalMetric" title="mxnet.metric.EvalMetric"><em>EvalMetric</em></a>) – The metric to monitor, and stop training if this metric does not improve.</p></li> |
| <li><p><strong>min_delta</strong> (<em>float</em><em>, </em><em>default 0</em>) – Minimal change in monitored value to be considered as an improvement.</p></li> |
| <li><p><strong>patience</strong> (<em>int</em><em>, </em><em>default 0</em>) – Number of epochs to wait for improvement before terminating training.</p></li> |
| <li><p><strong>mode</strong> (<em>str</em><em>, </em><em>default 'auto'</em>) – One of {auto, min, max}; the comparison to make to determine whether |
| the monitored value has improved. In 'auto' mode, the handler |
| will try to use min or max based on the monitored metric name.</p></li> |
| <li><p><strong>baseline</strong> (<em>float</em>) – Baseline value to compare the monitored value with.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
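<p>The stopping rule described by <cite>min_delta</cite> and <cite>patience</cite> can be sketched in plain Python. This is an illustration only, not the actual <cite>EarlyStoppingHandler</cite> code, and the 'auto' heuristic below is a simplified stand-in: maximize accuracy-like metrics, minimize everything else.</p>

```python
# Sketch of the early-stopping rule described above (illustration only,
# not the actual EarlyStoppingHandler code): stop once the monitored
# value has failed to improve by at least min_delta for more than
# `patience` consecutive epochs.

def infer_mode(metric_name):
    # Simplified stand-in for 'auto' mode inference from the metric name.
    return 'max' if 'acc' in metric_name else 'min'

def should_stop(history, metric_name='validation accuracy',
                min_delta=0.0, patience=0):
    mode = infer_mode(metric_name)
    best, wait = None, 0
    for value in history:
        if best is None or (value > best + min_delta if mode == 'max'
                            else value < best - min_delta):
            best, wait = value, 0   # improvement: reset the counter
        else:
            wait += 1               # no improvement this epoch
            if wait > patience:
                return True
    return False

print(should_stop([0.60, 0.70, 0.71, 0.71, 0.71], patience=1))  # True
print(should_stop([0.60, 0.70, 0.80, 0.90], patience=1))        # False
```

<p>A loss-like metric name flips the comparison, so a rising loss triggers the same stop condition.</p>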
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.estimator.Estimator"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.estimator.</code><code class="sig-name descname">Estimator</code><span class="sig-paren">(</span><em class="sig-param">net</em>, <em class="sig-param">loss</em>, <em class="sig-param">train_metrics=None</em>, <em class="sig-param">val_metrics=None</em>, <em class="sig-param">initializer=None</em>, <em class="sig-param">trainer=None</em>, <em class="sig-param">context=None</em>, <em class="sig-param">val_net=None</em>, <em class="sig-param">val_loss=None</em>, <em class="sig-param">batch_processor=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/estimator/estimator.html#Estimator"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.estimator.Estimator" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">object</span></code></p> |
| <p>Estimator class for easy model training.</p> |
| <p><a class="reference internal" href="#mxnet.gluon.contrib.estimator.Estimator" title="mxnet.gluon.contrib.estimator.Estimator"><code class="xref py py-class docutils literal notranslate"><span class="pre">Estimator</span></code></a> can be used to facilitate the training and validation process.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>net</strong> (<a class="reference internal" href="../block.html#mxnet.gluon.Block" title="mxnet.gluon.Block"><em>gluon.Block</em></a>) – The model used for training.</p></li> |
| <li><p><strong>loss</strong> (<a class="reference internal" href="../loss/index.html#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><em>gluon.loss.Loss</em></a>) – Loss (objective) function to calculate during training.</p></li> |
| <li><p><strong>train_metrics</strong> (<a class="reference internal" href="../../metric/index.html#mxnet.metric.EvalMetric" title="mxnet.metric.EvalMetric"><em>EvalMetric</em></a><em> or </em><em>list of EvalMetric</em>) – Training metrics for evaluating models on training dataset.</p></li> |
| <li><p><strong>val_metrics</strong> (<a class="reference internal" href="../../metric/index.html#mxnet.metric.EvalMetric" title="mxnet.metric.EvalMetric"><em>EvalMetric</em></a><em> or </em><em>list of EvalMetric</em>) – Validation metrics for evaluating models on validation dataset.</p></li> |
| <li><p><strong>initializer</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Initializer to initialize the network.</p></li> |
| <li><p><strong>trainer</strong> (<a class="reference internal" href="../trainer.html#mxnet.gluon.Trainer" title="mxnet.gluon.Trainer"><em>Trainer</em></a>) – Trainer to apply optimizer on network parameters.</p></li> |
| <li><p><strong>context</strong> (<a class="reference internal" href="../../mxnet/context/index.html#mxnet.context.Context" title="mxnet.context.Context"><em>Context</em></a><em> or </em><em>list of Context</em>) – Device(s) to run the training on.</p></li> |
| <li><p><strong>val_net</strong> (<a class="reference internal" href="../block.html#mxnet.gluon.Block" title="mxnet.gluon.Block"><em>gluon.Block</em></a>) – <p>The model used for validation. The validation model does not necessarily belong to |
| the same model class as the training model, but the two models typically share the |
| same architecture, so the validation model can reuse the parameters of the |
| training model.</p> |
| <p>A code example constructing a val_net that shares its network parameters with |
| the training net is given below:</p> |
| <div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">net</span> <span class="o">=</span> <span class="n">_get_train_network</span><span class="p">()</span> |
| <span class="gp">>>> </span><span class="n">val_net</span> <span class="o">=</span> <span class="n">_get_test_network</span><span class="p">(</span><span class="n">params</span><span class="o">=</span><span class="n">net</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span> |
| <span class="gp">>>> </span><span class="n">net</span><span class="o">.</span><span class="n">initialize</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">ctx</span><span class="p">)</span> |
| <span class="gp">>>> </span><span class="n">est</span> <span class="o">=</span> <span class="n">Estimator</span><span class="p">(</span><span class="n">net</span><span class="p">,</span> <span class="n">loss</span><span class="p">,</span> <span class="n">val_net</span><span class="o">=</span><span class="n">val_net</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <p>A proper namespace match is required for weight sharing between the two networks. Most networks |
| inheriting <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> can share their parameters correctly. An exception is |
| Sequential networks, for which a Block scope must be specified for correct weight sharing. For |
| naming in the mxnet Gluon API, please refer to |
| (<a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/naming.html">https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/naming.html</a>) |
| for further information.</p> |
| </p></li> |
| <li><p><strong>val_loss</strong> (<em>gluon.loss.Loss</em>) – Loss (objective) function to calculate during validation. If val_loss is |
| None, the same loss function as self.loss is used.</p></li> |
| <li><p><strong>batch_processor</strong> (<a class="reference internal" href="#mxnet.gluon.contrib.estimator.BatchProcessor" title="mxnet.gluon.contrib.estimator.BatchProcessor"><em>BatchProcessor</em></a>) – BatchProcessor provides customized fit_batch() and evaluate_batch() methods</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.estimator.Estimator.evaluate"> |
| <code class="sig-name descname">evaluate</code><span class="sig-paren">(</span><em class="sig-param">val_data</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">event_handlers=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/estimator/estimator.html#Estimator.evaluate"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.estimator.Estimator.evaluate" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Evaluate model on validation data.</p> |
| <p>This function calls <code class="xref py py-func docutils literal notranslate"><span class="pre">evaluate_batch()</span></code> on each of the batches from the |
| validation data loader. Thus, for custom use cases, it’s possible to inherit the |
| estimator class and override <code class="xref py py-func docutils literal notranslate"><span class="pre">evaluate_batch()</span></code>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>val_data</strong> (<a class="reference internal" href="../data/index.html#mxnet.gluon.data.DataLoader" title="mxnet.gluon.data.DataLoader"><em>DataLoader</em></a>) – Validation data loader with data and labels.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – Batch axis to split the validation data into devices.</p></li> |
| <li><p><strong>event_handlers</strong> (<em>EventHandler</em><em> or </em><em>list of EventHandler</em>) – List of <code class="xref py py-class docutils literal notranslate"><span class="pre">EventHandlers</span></code> to apply during validation. Besides |
| event handlers specified here, a default MetricHandler and a LoggingHandler |
| will be added if not specified explicitly.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.estimator.Estimator.fit"> |
| <code class="sig-name descname">fit</code><span class="sig-paren">(</span><em class="sig-param">train_data</em>, <em class="sig-param">val_data=None</em>, <em class="sig-param">epochs=None</em>, <em class="sig-param">event_handlers=None</em>, <em class="sig-param">batches=None</em>, <em class="sig-param">batch_axis=0</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/estimator/estimator.html#Estimator.fit"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.estimator.Estimator.fit" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Trains the model with a given <code class="xref py py-class docutils literal notranslate"><span class="pre">DataLoader</span></code> for a specified |
| number of epochs or batches. The batch size is inferred from the |
| data loader’s batch_size.</p> |
| <p>This function calls <code class="xref py py-func docutils literal notranslate"><span class="pre">fit_batch()</span></code> on each of the batches from the |
| training data loader. Thus, for custom use cases, it’s possible to inherit the |
| estimator class and override <code class="xref py py-func docutils literal notranslate"><span class="pre">fit_batch()</span></code>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>train_data</strong> (<a class="reference internal" href="../data/index.html#mxnet.gluon.data.DataLoader" title="mxnet.gluon.data.DataLoader"><em>DataLoader</em></a>) – Training data loader with data and labels.</p></li> |
| <li><p><strong>val_data</strong> (<a class="reference internal" href="../data/index.html#mxnet.gluon.data.DataLoader" title="mxnet.gluon.data.DataLoader"><em>DataLoader</em></a><em>, </em><em>default None</em>) – Validation data loader with data and labels.</p></li> |
| <li><p><strong>epochs</strong> (<em>int</em><em>, </em><em>default None</em>) – Number of epochs to iterate on the training data. |
| You can specify one and only one type of iteration (epochs or batches).</p></li> |
| <li><p><strong>event_handlers</strong> (<em>EventHandler</em><em> or </em><em>list of EventHandler</em>) – List of <code class="xref py py-class docutils literal notranslate"><span class="pre">EventHandlers</span></code> to apply during training. Besides |
| the event handlers specified here, a StoppingHandler, |
| LoggingHandler and MetricHandler will be added by default if not |
| yet specified manually. If validation data is provided, a |
| ValidationHandler is also added if not already specified.</p></li> |
| <li><p><strong>batches</strong> (<em>int</em><em>, </em><em>default None</em>) – Number of batches to iterate on the training data. |
| You can specify one and only one type of iteration (epochs or batches).</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – Batch axis to split the training data into devices.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
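<p>The constraint above, that one and only one of epochs or batches may be given, can be expressed as a small validation helper. This is a hypothetical sketch of the documented contract, not the actual <cite>fit()</cite> code.</p>

```python
# Hypothetical helper mirroring fit()'s documented constraint that
# exactly one of `epochs` or `batches` is specified (sketch only).

def check_iteration_args(epochs=None, batches=None):
    if (epochs is None) == (batches is None):
        raise ValueError(
            'you can only specify one and only one type of iteration '
            '(epochs or batches)')
    return ('epochs', epochs) if epochs is not None else ('batches', batches)

print(check_iteration_args(epochs=10))    # ('epochs', 10)
print(check_iteration_args(batches=500))  # ('batches', 500)
```

<p>Passing neither argument, or both, raises a ValueError under this sketch.</p>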
| |
| <dl class="attribute"> |
| <dt id="mxnet.gluon.contrib.estimator.Estimator.logger"> |
| <code class="sig-name descname">logger</code><em class="property"> = None</em><a class="headerlink" href="#mxnet.gluon.contrib.estimator.Estimator.logger" title="Permalink to this definition">¶</a></dt> |
| <dd><p>logging.Logger object associated with the Estimator.</p> |
| <p>The logger is used for all logs generated by this estimator and its |
| handlers. A new logging.Logger is created during Estimator construction and |
| configured to write all logs with level logging.INFO or higher to |
| sys.stdout.</p> |
| <p>You can modify the logging settings using the standard Python methods. For |
| example, to save logs to a file in addition to printing them to stdout |
| output, you can attach a logging.FileHandler to the logger.</p> |
| <div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">est</span> <span class="o">=</span> <span class="n">Estimator</span><span class="p">(</span><span class="n">net</span><span class="p">,</span> <span class="n">loss</span><span class="p">)</span> |
| <span class="gp">>>> </span><span class="kn">import</span> <span class="nn">logging</span> |
| <span class="gp">>>> </span><span class="n">est</span><span class="o">.</span><span class="n">logger</span><span class="o">.</span><span class="n">addHandler</span><span class="p">(</span><span class="n">logging</span><span class="o">.</span><span class="n">FileHandler</span><span class="p">(</span><span class="n">filename</span><span class="p">))</span> |
| </pre></div> |
| </div> |
| </dd></dl> |
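<p>The same setup can be reproduced with the standard library alone. The sketch below builds a logger configured as described (INFO-level records to sys.stdout) and then attaches a logging.FileHandler, so records reach both destinations; the logger name and log-file path are illustrative choices, not part of the Estimator API.</p>

```python
# Standard-library sketch of the logger configuration described above:
# INFO-level records to stdout, plus a FileHandler attached afterwards.
# The logger name and log-file location are illustrative choices.
import logging
import os
import sys
import tempfile

logger = logging.getLogger('estimator_demo')
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

log_path = os.path.join(tempfile.gettempdir(), 'estimator_demo.log')
file_handler = logging.FileHandler(log_path, mode='w')
logger.addHandler(file_handler)

logger.info('Training begin: epochs=5')
file_handler.flush()

with open(log_path) as f:
    print(f.read().strip())  # Training begin: epochs=5
```

<p>The record appears once on stdout and once in the file, mirroring how an attached FileHandler duplicates the Estimator's stdout logs.</p>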
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.estimator.GradientUpdateHandler"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.estimator.</code><code class="sig-name descname">GradientUpdateHandler</code><span class="sig-paren">(</span><em class="sig-param">priority=-2000</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/estimator/event_handler.html#GradientUpdateHandler"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.estimator.GradientUpdateHandler" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.BatchEnd</span></code></p> |
| <p>Gradient update handler that applies gradients to the network weights.</p> |
| <p><a class="reference internal" href="#mxnet.gluon.contrib.estimator.GradientUpdateHandler" title="mxnet.gluon.contrib.estimator.GradientUpdateHandler"><code class="xref py py-class docutils literal notranslate"><span class="pre">GradientUpdateHandler</span></code></a> takes a priority level and updates the weight parameters |
| at the end of each batch.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>priority</strong> (<em>scalar</em><em>, </em><em>default -2000</em>) – Priority level of the gradient update handler. Priority levels are sorted in ascending |
| order; the lower the number, the higher the handler's priority.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
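<p>The priority mechanics described above amount to an ascending sort: a handler with a lower number runs before one with a higher number at each event, which is why GradientUpdateHandler's default of -2000 places it ahead of the other default handlers. A sketch, with names and priorities taken from the defaults quoted in this section:</p>

```python
# Ascending-priority ordering of event handlers, as described above:
# a lower number means the handler runs earlier at each event.
handlers = [
    ('LoggingHandler', float('inf')),
    ('MetricHandler', -1000),
    ('ValidationHandler', -1000),
    ('GradientUpdateHandler', -2000),
]
ordered = [name for name, priority in sorted(handlers, key=lambda h: h[1])]
print(ordered)
# ['GradientUpdateHandler', 'MetricHandler', 'ValidationHandler', 'LoggingHandler']
```

<p>Ties (here the two -1000 handlers) keep their original relative order, since Python's sort is stable.</p>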
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.estimator.LoggingHandler"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.estimator.</code><code class="sig-name descname">LoggingHandler</code><span class="sig-paren">(</span><em class="sig-param">log_interval='epoch'</em>, <em class="sig-param">metrics=None</em>, <em class="sig-param">priority=inf</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/estimator/event_handler.html#LoggingHandler"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.estimator.LoggingHandler" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.TrainBegin</span></code>, <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.TrainEnd</span></code>, <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.EpochBegin</span></code>, <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.EpochEnd</span></code>, <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.BatchBegin</span></code>, <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.BatchEnd</span></code></p> |
| <p>Basic Logging Handler that applies to every Gluon estimator by default.</p> |
| <p><a class="reference internal" href="#mxnet.gluon.contrib.estimator.LoggingHandler" title="mxnet.gluon.contrib.estimator.LoggingHandler"><code class="xref py py-class docutils literal notranslate"><span class="pre">LoggingHandler</span></code></a> logs hyper-parameters, training statistics, |
| and other useful information during training.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>log_interval</strong> (<em>int</em><em> or </em><em>str</em><em>, </em><em>default 'epoch'</em>) – Logging interval during training. |
| log_interval='epoch' displays metrics every epoch; |
| log_interval=k (an integer) displays metrics every k batches.</p></li> |
| <li><p><strong>metrics</strong> (<em>list of EvalMetrics</em>) – Metrics to be logged at batch end, epoch end, and train end.</p></li> |
| <li><p><strong>priority</strong> (<em>scalar</em><em>, </em><em>default np.Inf</em>) – Priority level of the LoggingHandler. Priority levels are sorted in |
| ascending order; the lower the number, the higher the |
| handler's priority.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
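<p>The two <cite>log_interval</cite> forms can be sketched as follows; this is an illustration of the documented semantics, not the actual <cite>LoggingHandler</cite> code.</p>

```python
# Sketch of the log_interval semantics described above: 'epoch' defers
# metric display to epoch end, while an integer k displays metrics at
# every k-th batch. Illustration only, not the actual LoggingHandler.

def batches_logged(num_batches, log_interval):
    if log_interval == 'epoch':
        return []  # nothing at batch end; metrics are shown per epoch
    return [b for b in range(1, num_batches + 1) if b % log_interval == 0]

print(batches_logged(10, 'epoch'))  # []
print(batches_logged(10, 4))        # [4, 8]
```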
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.estimator.MetricHandler"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.estimator.</code><code class="sig-name descname">MetricHandler</code><span class="sig-paren">(</span><em class="sig-param">metrics</em>, <em class="sig-param">priority=-1000</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/estimator/event_handler.html#MetricHandler"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.estimator.MetricHandler" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.EpochBegin</span></code>, <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.BatchEnd</span></code></p> |
| <p>Metric handler that updates metric values at batch end.</p> |
| <p><a class="reference internal" href="#mxnet.gluon.contrib.estimator.MetricHandler" title="mxnet.gluon.contrib.estimator.MetricHandler"><code class="xref py py-class docutils literal notranslate"><span class="pre">MetricHandler</span></code></a> takes model predictions and true labels |
| and updates the metrics; it also updates the metric wrapper for loss with the loss values. |
| Validation loss and metrics are handled by <a class="reference internal" href="#mxnet.gluon.contrib.estimator.ValidationHandler" title="mxnet.gluon.contrib.estimator.ValidationHandler"><code class="xref py py-class docutils literal notranslate"><span class="pre">ValidationHandler</span></code></a>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>metrics</strong> (<em>List of EvalMetrics</em>) – Metrics to be updated at batch end.</p></li> |
| <li><p><strong>priority</strong> (<em>scalar</em>) – Priority level of the MetricHandler. Priority levels are sorted in ascending |
| order; the lower the number, the higher the handler's priority.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
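<p>The per-batch update-then-aggregate cycle described above can be mimicked with a plain-Python stand-in for an EvalMetric (illustration only; a real EvalMetric also exposes reset and name handling).</p>

```python
# Plain-Python stand-in for an EvalMetric, mimicking the batch-end
# update cycle described above: each batch's labels and predictions
# are folded into a running aggregate, read back with get().
class RunningAccuracy:
    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, labels, preds):
        for label, pred in zip(labels, preds):
            self.correct += int(label == pred)
            self.total += 1

    def get(self):
        return ('accuracy', self.correct / self.total)

metric = RunningAccuracy()
metric.update([1, 0, 1], [1, 1, 1])  # batch 1: 2 of 3 correct
metric.update([0, 0], [0, 1])        # batch 2: 1 of 2 correct
print(metric.get())                  # ('accuracy', 0.6)
```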
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.estimator.StoppingHandler"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.estimator.</code><code class="sig-name descname">StoppingHandler</code><span class="sig-paren">(</span><em class="sig-param">max_epoch=None</em>, <em class="sig-param">max_batch=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/estimator/event_handler.html#StoppingHandler"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.estimator.StoppingHandler" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.TrainBegin</span></code>, <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.BatchEnd</span></code>, <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.EpochEnd</span></code></p> |
| <p>Stop training if the maximum number of batches or epochs is reached.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>max_epoch</strong> (<em>int</em><em>, </em><em>default None</em>) – Maximum number of epochs to train.</p></li> |
| <li><p><strong>max_batch</strong> (<em>int</em><em>, </em><em>default None</em>) – Maximum number of batches to train.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.estimator.ValidationHandler"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.contrib.estimator.</code><code class="sig-name descname">ValidationHandler</code><span class="sig-paren">(</span><em class="sig-param">val_data</em>, <em class="sig-param">eval_fn</em>, <em class="sig-param">epoch_period=1</em>, <em class="sig-param">batch_period=None</em>, <em class="sig-param">priority=-1000</em>, <em class="sig-param">event_handlers=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/estimator/event_handler.html#ValidationHandler"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.estimator.ValidationHandler" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.TrainBegin</span></code>, <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.BatchEnd</span></code>, <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.contrib.estimator.event_handler.EpochEnd</span></code></p> |
| <p>Validation handler that evaluates the model on a validation dataset.</p> |
| <p><a class="reference internal" href="#mxnet.gluon.contrib.estimator.ValidationHandler" title="mxnet.gluon.contrib.estimator.ValidationHandler"><code class="xref py py-class docutils literal notranslate"><span class="pre">ValidationHandler</span></code></a> takes a validation dataset, an evaluation function, |
| the metrics to be evaluated, and how often to run the validation. You can provide a custom |
| evaluation function or use the one provided by <a class="reference internal" href="#mxnet.gluon.contrib.estimator.Estimator" title="mxnet.gluon.contrib.estimator.Estimator"><code class="xref py py-class docutils literal notranslate"><span class="pre">Estimator</span></code></a>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>val_data</strong> (<a class="reference internal" href="../data/index.html#mxnet.gluon.data.DataLoader" title="mxnet.gluon.data.DataLoader"><em>DataLoader</em></a>) – Validation data set to run evaluation.</p></li> |
| <li><p><strong>eval_fn</strong> (<em>function</em>) – A function defines how to run evaluation and |
| calculate loss and metrics.</p></li> |
| <li><p><strong>epoch_period</strong> (<em>int</em><em>, </em><em>default 1</em>) – How often to run validation at epoch end; by default |
| <a class="reference internal" href="#mxnet.gluon.contrib.estimator.ValidationHandler" title="mxnet.gluon.contrib.estimator.ValidationHandler"><code class="xref py py-class docutils literal notranslate"><span class="pre">ValidationHandler</span></code></a> validates every epoch.</p></li> |
| <li><p><strong>batch_period</strong> (<em>int</em><em>, </em><em>default None</em>) – How often to run validation at batch end; by default |
| <a class="reference internal" href="#mxnet.gluon.contrib.estimator.ValidationHandler" title="mxnet.gluon.contrib.estimator.ValidationHandler"><code class="xref py py-class docutils literal notranslate"><span class="pre">ValidationHandler</span></code></a> does not validate at batch end.</p></li> |
| <li><p><strong>priority</strong> (<em>scalar</em><em>, </em><em>default -1000</em>) – Priority level of the ValidationHandler. Priority levels are sorted in |
| ascending order; the lower the number, the higher the |
| handler's priority.</p></li> |
| <li><p><strong>event_handlers</strong> (<em>EventHandler</em><em> or </em><em>list of EventHandlers</em>) – List of <code class="xref py py-class docutils literal notranslate"><span class="pre">EventHandler</span></code> to apply during validation. This argument |
| is used by the self.eval_fn function to process customized event |
| handlers.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </div> |
| </div> |
| |
| |
| </div> |
| <div class="side-doc-outline"> |
| <div class="side-doc-outline--content"> |
| <div class="localtoc"> |
| <p class="caption"> |
| <span class="caption-text">Table Of Contents</span> |
| </p> |
| <ul> |
| <li><a class="reference internal" href="#">gluon.contrib</a><ul> |
| <li><a class="reference internal" href="#neural-network">Neural Network</a></li> |
| <li><a class="reference internal" href="#convolutional-neural-network">Convolutional Neural Network</a></li> |
| <li><a class="reference internal" href="#recurrent-neural-network">Recurrent Neural Network</a></li> |
| <li><a class="reference internal" href="#data">Data</a></li> |
| <li><a class="reference internal" href="#text-dataset">Text Dataset</a></li> |
| <li><a class="reference internal" href="#estimator">Estimator</a></li> |
| <li><a class="reference internal" href="#event-handler">Event Handler</a></li> |
| <li><a class="reference internal" href="#module-mxnet.gluon.contrib">API Reference</a></li> |
| </ul> |
| </li> |
| </ul> |
| |
| </div> |
| </div> |
| </div> |
| |
| <div class="clearer"></div> |
| </div><div class="pagenation"> |
| <a id="button-prev" href="../trainer.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="button" accesskey="P">
| <i class="pagenation-arrow-L fas fa-arrow-left fa-lg"></i> |
| <div class="pagenation-text"> |
| <span class="pagenation-direction">Previous</span> |
| <div>gluon.Trainer</div> |
| </div> |
| </a> |
| <a id="button-next" href="../data/index.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="button" accesskey="N">
| <i class="pagenation-arrow-R fas fa-arrow-right fa-lg"></i> |
| <div class="pagenation-text"> |
| <span class="pagenation-direction">Next</span> |
| <div>gluon.data</div> |
| </div> |
| </a> |
| </div> |
| <footer class="site-footer h-card"> |
| <div class="wrapper"> |
| <div class="row"> |
| <div class="col-4"> |
| <h4 class="footer-category-title">Resources</h4> |
| <ul class="contact-list"> |
| <li><a class="u-email" href="mailto:dev@mxnet.apache.org">Dev list</a></li> |
| <li><a class="u-email" href="mailto:user@mxnet.apache.org">User mailing list</a></li> |
| <li><a href="https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+Home">Developer Wiki</a></li> |
| <li><a href="https://issues.apache.org/jira/projects/MXNET/issues">Jira Tracker</a></li> |
| <li><a href="https://github.com/apache/incubator-mxnet/labels/Roadmap">Github Roadmap</a></li> |
| <li><a href="https://discuss.mxnet.io">MXNet Discuss forum</a></li> |
| <li><a href="/community/contribute">Contribute To MXNet</a></li> |
| |
| </ul> |
| </div> |
| |
| <div class="col-4"><ul class="social-media-list"><li><a href="https://github.com/apache/incubator-mxnet"><svg class="svg-icon"><use xlink:href="../../../_static/minima-social-icons.svg#github"></use></svg> <span class="username">apache/incubator-mxnet</span></a></li><li><a href="https://www.twitter.com/apachemxnet"><svg class="svg-icon"><use xlink:href="../../../_static/minima-social-icons.svg#twitter"></use></svg> <span class="username">apachemxnet</span></a></li><li><a href="https://youtube.com/apachemxnet"><svg class="svg-icon"><use xlink:href="../../../_static/minima-social-icons.svg#youtube"></use></svg> <span class="username">apachemxnet</span></a></li></ul> |
| </div> |
| |
| <div class="col-4 footer-text"> |
| <p>A flexible and efficient library for deep learning.</p> |
| </div> |
| </div> |
| </div> |
| </footer> |
| |
| <footer class="site-footer2"> |
| <div class="wrapper"> |
| <div class="row"> |
| <div class="col-3"> |
| <img src="../../../_static/apache_incubator_logo.png" alt="Apache Incubator logo" class="footer-logo col-2">
| </div> |
| <div class="footer-bottom-warning col-9"> |
| <p>Apache MXNet is an effort undergoing incubation at The Apache Software Foundation (ASF), <span style="font-weight:bold">sponsored by the <i>Apache Incubator</i></span>. Incubation is required |
| of all newly accepted projects until a further review indicates that the infrastructure, |
| communications, and decision making process have stabilized in a manner consistent with other |
| successful ASF projects. While incubation status is not necessarily a reflection of the completeness |
| or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. |
| </p><p>Copyright © 2017-2018, The Apache Software Foundation. Apache MXNet, MXNet, Apache, the Apache
| feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the
| Apache Software Foundation.</p>
| </div> |
| </div> |
| </div> |
| </footer> |
| |
| </body> |
| </html> |