| <!DOCTYPE html> |
| |
| <html lang="en"> |
| <head> |
| <meta charset="utf-8"/> |
| <meta content="IE=edge" http-equiv="X-UA-Compatible"/> |
| <meta content="width=device-width, initial-scale=1" name="viewport"/> |
| <title>Gluon Contrib API — mxnet documentation</title> |
| <link crossorigin="anonymous" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7" rel="stylesheet"/> |
| <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.5.0/css/font-awesome.min.css" rel="stylesheet"/> |
| <link href="../../../_static/basic.css" rel="stylesheet" type="text/css"> |
| <link href="../../../_static/pygments.css" rel="stylesheet" type="text/css"> |
| <link href="../../../_static/mxnet.css" rel="stylesheet" type="text/css"/> |
| <script type="text/javascript"> |
| var DOCUMENTATION_OPTIONS = { |
| URL_ROOT: '../../../', |
| VERSION: '', |
| COLLAPSE_INDEX: false, |
| FILE_SUFFIX: '.html', |
| HAS_SOURCE: true, |
| SOURCELINK_SUFFIX: '' |
| }; |
| </script> |
| <script src="https://code.jquery.com/jquery-1.11.1.min.js" type="text/javascript"></script> |
| <script src="../../../_static/underscore.js" type="text/javascript"></script> |
| <script src="../../../_static/searchtools_custom.js" type="text/javascript"></script> |
| <script src="../../../_static/doctools.js" type="text/javascript"></script> |
| <script src="../../../_static/selectlang.js" type="text/javascript"></script> |
| <script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script> |
| <script type="text/javascript"> jQuery(function() { Search.loadIndex("/searchindex.js"); Search.init();}); </script> |
| <script> |
| (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ |
| (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new |
| Date();a=s.createElement(o), |
| m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) |
| })(window,document,'script','https://www.google-analytics.com/analytics.js','ga'); |
| |
| ga('create', 'UA-96378503-1', 'auto'); |
| ga('send', 'pageview'); |
| |
| </script> |
| <!-- --> |
| <!-- <script type="text/javascript" src="../../../_static/jquery.js"></script> --> |
| <!-- --> |
| <!-- <script type="text/javascript" src="../../../_static/underscore.js"></script> --> |
| <!-- --> |
| <!-- <script type="text/javascript" src="../../../_static/doctools.js"></script> --> |
| <!-- --> |
| <!-- <script type="text/javascript" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> --> |
| <!-- --> |
| <link href="gluon.html" rel="up" title="Gluon Package"> |
| <link href="../kvstore/kvstore.html" rel="next" title="KVStore API"/> |
| <link href="model_zoo.html" rel="prev" title="Gluon Model Zoo"/> |
| <link href="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet-icon.png" rel="icon" type="image/png"/> |
</head>
| <body background="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet-background-compressed.jpeg" role="document"> |
| <div class="content-block"><div class="navbar navbar-fixed-top"> |
| <div class="container" id="navContainer"> |
| <div class="innder" id="header-inner"> |
| <h1 id="logo-wrap"> |
| <a href="../../../" id="logo"><img src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet_logo.png"/></a> |
| </h1> |
| <nav class="nav-bar" id="main-nav"> |
| <a class="main-nav-link" href="../../../install/index.html">Install</a> |
| <a class="main-nav-link" href="../../../tutorials/index.html">Tutorials</a> |
| <span id="dropdown-menu-position-anchor"> |
| <a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">Gluon <span class="caret"></span></a> |
| <ul class="dropdown-menu navbar-menu" id="package-dropdown-menu"> |
| <li><a class="main-nav-link" href="../../../gluon/index.html">About</a></li> |
| <li><a class="main-nav-link" href="http://gluon.mxnet.io">Tutorials</a></li> |
| </ul> |
| </span> |
| <span id="dropdown-menu-position-anchor"> |
| <a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">API <span class="caret"></span></a> |
| <ul class="dropdown-menu navbar-menu" id="package-dropdown-menu"> |
| <li><a class="main-nav-link" href="../../../api/python/index.html">Python</a></li> |
| <li><a class="main-nav-link" href="../../../api/scala/index.html">Scala</a></li> |
| <li><a class="main-nav-link" href="../../../api/r/index.html">R</a></li> |
| <li><a class="main-nav-link" href="../../../api/julia/index.html">Julia</a></li> |
| <li><a class="main-nav-link" href="../../../api/c++/index.html">C++</a></li> |
| <li><a class="main-nav-link" href="../../../api/perl/index.html">Perl</a></li> |
| </ul> |
| </span> |
| <span id="dropdown-menu-position-anchor-docs"> |
| <a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">Docs <span class="caret"></span></a> |
| <ul class="dropdown-menu navbar-menu" id="package-dropdown-menu-docs"> |
| <li><a class="main-nav-link" href="../../../faq/index.html">FAQ</a></li> |
| <li><a class="main-nav-link" href="../../../architecture/index.html">Architecture</a></li> |
| <li><a class="main-nav-link" href="https://github.com/apache/incubator-mxnet/tree/1.0.0/example">Examples</a></li> |
| <li><a class="main-nav-link" href="../../../model_zoo/index.html">Model Zoo</a></li> |
| </ul> |
| </span> |
| <a class="main-nav-link" href="https://github.com/dmlc/mxnet">Github</a> |
| <span id="dropdown-menu-position-anchor-community"> |
| <a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">Community <span class="caret"></span></a> |
| <ul class="dropdown-menu navbar-menu" id="package-dropdown-menu-community"> |
| <li><a class="main-nav-link" href="../../../community/index.html">Community</a></li> |
| <li><a class="main-nav-link" href="../../../community/contribute.html">Contribute</a></li> |
| <li><a class="main-nav-link" href="../../../community/powered_by.html">Powered By</a></li> |
| </ul> |
| </span> |
| <a class="main-nav-link" href="http://discuss.mxnet.io">Discuss</a> |
| <span id="dropdown-menu-position-anchor-version" style="position: relative"><a href="#" class="main-nav-link dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="true">Versions(1.0.0)<span class="caret"></span></a><ul id="package-dropdown-menu" class="dropdown-menu"><li><a class="main-nav-link" href=https://mxnet.incubator.apache.org/>1.0.0</a></li><li><a class="main-nav-link" href=https://mxnet.incubator.apache.org/versions/0.12.1/index.html>0.12.1</a></li><li><a class="main-nav-link" href=https://mxnet.incubator.apache.org/versions/0.12.0/index.html>0.12.0</a></li><li><a class="main-nav-link" href=https://mxnet.incubator.apache.org/versions/0.11.0/index.html>0.11.0</a></li><li><a class="main-nav-link" href=https://mxnet.incubator.apache.org/versions/master/index.html>master</a></li></ul></span></nav> |
| <script> function getRootPath(){ return "../../../" } </script> |
| <div class="burgerIcon dropdown"> |
| <a class="dropdown-toggle" data-toggle="dropdown" href="#" role="button">☰</a> |
| <ul class="dropdown-menu" id="burgerMenu"> |
| <li><a href="../../../install/index.html">Install</a></li> |
| <li><a class="main-nav-link" href="../../../tutorials/index.html">Tutorials</a></li> |
| <li class="dropdown-submenu"> |
| <a href="#" tabindex="-1">Community</a> |
| <ul class="dropdown-menu"> |
| <li><a href="../../../community/index.html" tabindex="-1">Community</a></li> |
| <li><a href="../../../community/contribute.html" tabindex="-1">Contribute</a></li> |
| <li><a href="../../../community/powered_by.html" tabindex="-1">Powered By</a></li> |
| </ul> |
| </li> |
| <li class="dropdown-submenu"> |
| <a href="#" tabindex="-1">API</a> |
| <ul class="dropdown-menu"> |
| <li><a href="../../../api/python/index.html" tabindex="-1">Python</a> |
| </li> |
| <li><a href="../../../api/scala/index.html" tabindex="-1">Scala</a> |
| </li> |
| <li><a href="../../../api/r/index.html" tabindex="-1">R</a> |
| </li> |
| <li><a href="../../../api/julia/index.html" tabindex="-1">Julia</a> |
| </li> |
| <li><a href="../../../api/c++/index.html" tabindex="-1">C++</a> |
| </li> |
| <li><a href="../../../api/perl/index.html" tabindex="-1">Perl</a> |
| </li> |
| </ul> |
| </li> |
| <li class="dropdown-submenu"> |
| <a href="#" tabindex="-1">Docs</a> |
| <ul class="dropdown-menu"> |
| <li><a href="../../../tutorials/index.html" tabindex="-1">Tutorials</a></li> |
| <li><a href="../../../faq/index.html" tabindex="-1">FAQ</a></li> |
| <li><a href="../../../architecture/index.html" tabindex="-1">Architecture</a></li> |
| <li><a href="https://github.com/apache/incubator-mxnet/tree/1.0.0/example" tabindex="-1">Examples</a></li> |
| <li><a href="../../../model_zoo/index.html" tabindex="-1">Model Zoo</a></li> |
| </ul> |
| </li> |
| <li><a href="../../../architecture/index.html">Architecture</a></li> |
| <li><a class="main-nav-link" href="https://github.com/dmlc/mxnet">Github</a></li> |
| <li id="dropdown-menu-position-anchor-version-mobile" class="dropdown-submenu" style="position: relative"><a href="#" tabindex="-1">Versions(1.0.0)</a><ul class="dropdown-menu"><li><a tabindex="-1" href=https://mxnet.incubator.apache.org/>1.0.0</a></li><li><a tabindex="-1" href=https://mxnet.incubator.apache.org/versions/0.12.1/index.html>0.12.1</a></li><li><a tabindex="-1" href=https://mxnet.incubator.apache.org/versions/0.12.0/index.html>0.12.0</a></li><li><a tabindex="-1" href=https://mxnet.incubator.apache.org/versions/0.11.0/index.html>0.11.0</a></li><li><a tabindex="-1" href=https://mxnet.incubator.apache.org/versions/master/index.html>master</a></li></ul></li></ul> |
| </div> |
| <div class="plusIcon dropdown"> |
| <a class="dropdown-toggle" data-toggle="dropdown" href="#" role="button"><span aria-hidden="true" class="glyphicon glyphicon-plus"></span></a> |
| <ul class="dropdown-menu dropdown-menu-right" id="plusMenu"></ul> |
| </div> |
| <div id="search-input-wrap"> |
| <form action="../../../search.html" autocomplete="off" class="" method="get" role="search"> |
| <div class="form-group inner-addon left-addon"> |
| <i class="glyphicon glyphicon-search"></i> |
| <input class="form-control" name="q" placeholder="Search" type="text"/> |
| </div> |
| <input name="check_keywords" type="hidden" value="yes"> |
| <input name="area" type="hidden" value="default"/> |
| </input></form> |
| <div id="search-preview"></div> |
| </div> |
| <div id="searchIcon"> |
| <span aria-hidden="true" class="glyphicon glyphicon-search"></span> |
| </div> |
| <!-- <div id="lang-select-wrap"> --> |
| <!-- <label id="lang-select-label"> --> |
| <!-- <\!-- <i class="fa fa-globe"></i> -\-> --> |
| <!-- <span></span> --> |
| <!-- </label> --> |
| <!-- <select id="lang-select"> --> |
| <!-- <option value="en">Eng</option> --> |
| <!-- <option value="zh">中文</option> --> |
| <!-- </select> --> |
| <!-- </div> --> |
| <!-- <a id="mobile-nav-toggle"> |
| <span class="mobile-nav-toggle-bar"></span> |
| <span class="mobile-nav-toggle-bar"></span> |
| <span class="mobile-nav-toggle-bar"></span> |
| </a> --> |
| </div> |
| </div> |
| </div> |
| <script type="text/javascript"> |
| $('body').css('background', 'white'); |
| </script> |
| <div class="container"> |
| <div class="row"> |
| <div aria-label="main navigation" class="sphinxsidebar leftsidebar" role="navigation"> |
| <div class="sphinxsidebarwrapper"> |
| <ul class="current"> |
| <li class="toctree-l1 current"><a class="reference internal" href="../index.html">Python Documents</a><ul class="current"> |
| <li class="toctree-l2"><a class="reference internal" href="../index.html#ndarray-api">NDArray API</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../index.html#symbol-api">Symbol API</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../index.html#module-api">Module API</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../index.html#autograd-api">Autograd API</a></li> |
| <li class="toctree-l2 current"><a class="reference internal" href="../index.html#gluon-api">Gluon API</a><ul class="current"> |
| <li class="toctree-l3 current"><a class="reference internal" href="gluon.html">Gluon Package</a><ul class="current"> |
| <li class="toctree-l4 current"><a class="reference internal" href="gluon.html#overview">Overview</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="gluon.html#parameter">Parameter</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="gluon.html#containers">Containers</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="gluon.html#trainer">Trainer</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="gluon.html#utilities">Utilities</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="gluon.html#api-reference">API Reference</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="nn.html">Gluon Neural Network Layers</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="rnn.html">Gluon Recurrent Neural Network API</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="loss.html">Gluon Loss API</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="data.html">Gluon Data API</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="model_zoo.html">Gluon Model Zoo</a></li> |
| <li class="toctree-l3 current"><a class="current reference internal" href="">Gluon Contrib API</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="#overview">Overview</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="#contrib">Contrib</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="#api-reference">API Reference</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../index.html#kvstore-api">KVStore API</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../index.html#io-api">IO API</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../index.html#image-api">Image API</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../index.html#optimization-api">Optimization API</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../index.html#callback-api">Callback API</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../index.html#metric-api">Metric API</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../index.html#run-time-compilation-api">Run-Time Compilation API</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l1"><a class="reference internal" href="../../r/index.html">R Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../julia/index.html">Julia Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../c++/index.html">C++ Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../scala/index.html">Scala Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../perl/index.html">Perl Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../../faq/index.html">HowTo Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../../architecture/index.html">System Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../../tutorials/index.html">Tutorials</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../../community/index.html">Community</a></li> |
| </ul> |
| </div> |
| </div> |
| <div class="content"> |
| <div class="page-tracker"></div> |
| <div class="section" id="gluon-contrib-api"> |
| <span id="gluon-contrib-api"></span><h1>Gluon Contrib API<a class="headerlink" href="#gluon-contrib-api" title="Permalink to this headline">¶</a></h1> |
| <div class="section" id="overview"> |
| <span id="overview"></span><h2>Overview<a class="headerlink" href="#overview" title="Permalink to this headline">¶</a></h2> |
| <p>This document lists the contrib APIs in Gluon:</p> |
| <table border="1" class="longtable docutils"> |
| <colgroup> |
| <col width="10%"/> |
| <col width="90%"/> |
| </colgroup> |
| <tbody valign="top"> |
| <tr class="row-odd"><td><a class="reference internal" href="#module-mxnet.gluon.contrib" title="mxnet.gluon.contrib"><code class="xref py py-obj docutils literal"><span class="pre">mxnet.gluon.contrib</span></code></a></td> |
| <td>Contrib neural network module.</td> |
| </tr> |
| </tbody> |
| </table> |
| <p>The <code class="docutils literal"><span class="pre">Gluon</span> <span class="pre">Contrib</span></code> API, defined in the <code class="docutils literal"><span class="pre">gluon.contrib</span></code> package, provides |
| many useful experimental APIs for new features. |
| This is a place for the community to try out the new features, |
| so that feature contributors can receive feedback.</p> |
| <div class="admonition warning"> |
| <p class="first admonition-title">Warning</p> |
| <p class="last">This package contains experimental APIs and may change in the near future.</p> |
| </div> |
| <p>In the rest of this document, we list routines provided by the <code class="docutils literal"><span class="pre">gluon.contrib</span></code> package.</p> |
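<p>As a quick orientation, the experimental recurrent cells documented below can be imported from <code class="docutils literal"><span class="pre">mxnet.gluon.contrib.rnn</span></code>. The following minimal snippet simply lists the cells shipped in this namespace:</p>
<div class="highlight-python"><div class="highlight"><pre>from mxnet.gluon import contrib

# The experimental recurrent cells documented below live in gluon.contrib.rnn.
print([name for name in dir(contrib.rnn) if name.endswith('Cell')])
</pre></div>
</div>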
| </div> |
| <div class="section" id="contrib"> |
| <span id="contrib"></span><h2>Contrib<a class="headerlink" href="#contrib" title="Permalink to this headline">¶</a></h2> |
| <table border="1" class="longtable docutils"> |
| <colgroup> |
| <col width="10%"/> |
| <col width="90%"/> |
| </colgroup> |
| <tbody valign="top"> |
| <tr class="row-odd"><td><a class="reference internal" href="#mxnet.gluon.contrib.rnn.VariationalDropoutCell" title="mxnet.gluon.contrib.rnn.VariationalDropoutCell"><code class="xref py py-obj docutils literal"><span class="pre">rnn.VariationalDropoutCell</span></code></a></td> |
| <td>Applies Variational Dropout on base cell.</td> |
| </tr> |
| <tr class="row-even"><td><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv1DRNNCell" title="mxnet.gluon.contrib.rnn.Conv1DRNNCell"><code class="xref py py-obj docutils literal"><span class="pre">rnn.Conv1DRNNCell</span></code></a></td> |
| <td>1D Convolutional RNN cell.</td> |
| </tr> |
| <tr class="row-odd"><td><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv2DRNNCell" title="mxnet.gluon.contrib.rnn.Conv2DRNNCell"><code class="xref py py-obj docutils literal"><span class="pre">rnn.Conv2DRNNCell</span></code></a></td> |
| <td>2D Convolutional RNN cell.</td> |
| </tr> |
| <tr class="row-even"><td><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv3DRNNCell" title="mxnet.gluon.contrib.rnn.Conv3DRNNCell"><code class="xref py py-obj docutils literal"><span class="pre">rnn.Conv3DRNNCell</span></code></a></td> |
<td>3D Convolutional RNN cell.</td>
| </tr> |
| <tr class="row-odd"><td><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv1DLSTMCell" title="mxnet.gluon.contrib.rnn.Conv1DLSTMCell"><code class="xref py py-obj docutils literal"><span class="pre">rnn.Conv1DLSTMCell</span></code></a></td> |
| <td>1D Convolutional LSTM network cell.</td> |
| </tr> |
| <tr class="row-even"><td><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv2DLSTMCell" title="mxnet.gluon.contrib.rnn.Conv2DLSTMCell"><code class="xref py py-obj docutils literal"><span class="pre">rnn.Conv2DLSTMCell</span></code></a></td> |
| <td>2D Convolutional LSTM network cell.</td> |
| </tr> |
| <tr class="row-odd"><td><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv3DLSTMCell" title="mxnet.gluon.contrib.rnn.Conv3DLSTMCell"><code class="xref py py-obj docutils literal"><span class="pre">rnn.Conv3DLSTMCell</span></code></a></td> |
| <td>3D Convolutional LSTM network cell.</td> |
| </tr> |
| <tr class="row-even"><td><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv1DGRUCell" title="mxnet.gluon.contrib.rnn.Conv1DGRUCell"><code class="xref py py-obj docutils literal"><span class="pre">rnn.Conv1DGRUCell</span></code></a></td> |
<td>1D Convolutional Gated Recurrent Unit (GRU) network cell.</td>
| </tr> |
| <tr class="row-odd"><td><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv2DGRUCell" title="mxnet.gluon.contrib.rnn.Conv2DGRUCell"><code class="xref py py-obj docutils literal"><span class="pre">rnn.Conv2DGRUCell</span></code></a></td> |
<td>2D Convolutional Gated Recurrent Unit (GRU) network cell.</td>
| </tr> |
| <tr class="row-even"><td><a class="reference internal" href="#mxnet.gluon.contrib.rnn.Conv3DGRUCell" title="mxnet.gluon.contrib.rnn.Conv3DGRUCell"><code class="xref py py-obj docutils literal"><span class="pre">rnn.Conv3DGRUCell</span></code></a></td> |
<td>3D Convolutional Gated Recurrent Unit (GRU) network cell.</td>
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <div class="section" id="api-reference"> |
| <span id="api-reference"></span><h2>API Reference<a class="headerlink" href="#api-reference" title="Permalink to this headline">¶</a></h2> |
| <script src="../../../_static/js/auto_module_index.js" type="text/javascript"></script><span class="target" id="module-mxnet.gluon.contrib"></span><p>Contrib neural network module.</p> |
| <span class="target" id="module-mxnet.gluon.contrib.rnn"></span><p>Contrib recurrent neural network module.</p> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv1DRNNCell"> |
| <em class="property">class </em><code class="descclassname">mxnet.gluon.contrib.rnn.</code><code class="descname">Conv1DRNNCell</code><span class="sig-paren">(</span><em>input_shape</em>, <em>hidden_channels</em>, <em>i2h_kernel</em>, <em>h2h_kernel</em>, <em>i2h_pad=(0</em>, <em>)</em>, <em>i2h_dilate=(1</em>, <em>)</em>, <em>h2h_dilate=(1</em>, <em>)</em>, <em>i2h_weight_initializer=None</em>, <em>h2h_weight_initializer=None</em>, <em>i2h_bias_initializer='zeros'</em>, <em>h2h_bias_initializer='zeros'</em>, <em>conv_layout='NCW'</em>, <em>activation='tanh'</em>, <em>prefix=None</em>, <em>params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv1DRNNCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv1DRNNCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>1D Convolutional RNN cell.</p> |
| <div class="math"> |
| \[h_t = tanh(W_i \ast x_t + R_i \ast h_{t-1} + b_i)\]</div> |
| <table class="docutils field-list" frame="void" rules="none"> |
| <col class="field-name"/> |
| <col class="field-body"/> |
| <tbody valign="top"> |
| <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple"> |
| <li><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding dimension of the batch size |
| and sequence length. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCW’ the shape should be (C, W).</li> |
| <li><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</li> |
| <li><strong>i2h_kernel</strong> (<em>int or tuple of int</em>) – Input convolution kernel sizes.</li> |
| <li><strong>h2h_kernel</strong> (<em>int or tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</li> |
| <li><strong>i2h_pad</strong> (<em>int or tuple of int, default (0,)</em>) – Pad for input convolution.</li> |
| <li><strong>i2h_dilate</strong> (<em>int or tuple of int, default (1,)</em>) – Input convolution dilate.</li> |
| <li><strong>h2h_dilate</strong> (<em>int or tuple of int, default (1,)</em>) – Recurrent convolution dilate.</li> |
| <li><strong>i2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the input weights matrix, used for the input convolutions.</li> |
<li><strong>h2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</li>
| <li><strong>i2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the input convolution bias vectors.</li> |
| <li><strong>h2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the recurrent convolution bias vectors.</li> |
| <li><strong>conv_layout</strong> (<em>str, default 'NCW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCW’ and ‘NWC’.</li> |
| <li><strong>activation</strong> (<em>str or Block, default 'tanh'</em>) – Type of activation function. |
| If argument type is string, it’s equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks such as nn.LeakyReLU can be used.</li> |
<li><strong>prefix</strong> (<em>str, default 'conv_rnn_'</em>) – Prefix for name of layers (and name of weight if params is None).</li>
| <li><strong>params</strong> (<em>RNNParams, default None</em>) – Container for weight sharing between cells. Created if None.</li> |
| </ul> |
| </td> |
| </tr> |
| </tbody> |
| </table> |
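<p>A minimal usage sketch; the batch size, input shape, and kernel sizes below are illustrative choices rather than defaults:</p>
<div class="highlight-python"><div class="highlight"><pre>import mxnet as mx
from mxnet.gluon.contrib import rnn

# 'NCW' layout: each step is (batch, channels, width); shapes here are assumed for illustration.
cell = rnn.Conv1DRNNCell(input_shape=(3, 30), hidden_channels=10,
                         i2h_kernel=(3,), h2h_kernel=(3,))
cell.initialize()
x = mx.nd.random.uniform(shape=(4, 3, 30))   # one time step for a batch of 4
states = cell.begin_state(batch_size=4)
output, states = cell(x, states)             # output shape: (4, 10, 28) without input padding
</pre></div>
</div>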
| </dd></dl> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv2DRNNCell"> |
| <em class="property">class </em><code class="descclassname">mxnet.gluon.contrib.rnn.</code><code class="descname">Conv2DRNNCell</code><span class="sig-paren">(</span><em>input_shape</em>, <em>hidden_channels</em>, <em>i2h_kernel</em>, <em>h2h_kernel</em>, <em>i2h_pad=(0</em>, <em>0)</em>, <em>i2h_dilate=(1</em>, <em>1)</em>, <em>h2h_dilate=(1</em>, <em>1)</em>, <em>i2h_weight_initializer=None</em>, <em>h2h_weight_initializer=None</em>, <em>i2h_bias_initializer='zeros'</em>, <em>h2h_bias_initializer='zeros'</em>, <em>conv_layout='NCHW'</em>, <em>activation='tanh'</em>, <em>prefix=None</em>, <em>params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv2DRNNCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv2DRNNCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>2D Convolutional RNN cell.</p> |
| <div class="math"> |
| \[h_t = tanh(W_i \ast x_t + R_i \ast h_{t-1} + b_i)\]</div> |
| <table class="docutils field-list" frame="void" rules="none"> |
| <col class="field-name"/> |
| <col class="field-body"/> |
| <tbody valign="top"> |
| <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple"> |
| <li><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding dimension of the batch size |
| and sequence length. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCHW’ the shape should be (C, H, W).</li> |
| <li><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</li> |
| <li><strong>i2h_kernel</strong> (<em>int or tuple of int</em>) – Input convolution kernel sizes.</li> |
| <li><strong>h2h_kernel</strong> (<em>int or tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</li> |
| <li><strong>i2h_pad</strong> (<em>int or tuple of int, default (0, 0)</em>) – Pad for input convolution.</li> |
| <li><strong>i2h_dilate</strong> (<em>int or tuple of int, default (1, 1)</em>) – Input convolution dilate.</li> |
| <li><strong>h2h_dilate</strong> (<em>int or tuple of int, default (1, 1)</em>) – Recurrent convolution dilate.</li> |
| <li><strong>i2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the input weights matrix, used for the input convolutions.</li> |
<li><strong>h2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</li>
| <li><strong>i2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the input convolution bias vectors.</li> |
| <li><strong>h2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the recurrent convolution bias vectors.</li> |
| <li><strong>conv_layout</strong> (<em>str, default 'NCHW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCHW’ and ‘NHWC’.</li> |
| <li><strong>activation</strong> (<em>str or Block, default 'tanh'</em>) – Type of activation function. |
| If argument type is string, it’s equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks such as nn.LeakyReLU can be used.</li> |
<li><strong>prefix</strong> (<em>str, default 'conv_rnn_'</em>) – Prefix for name of layers (and name of weight if params is None).</li>
| <li><strong>params</strong> (<em>RNNParams, default None</em>) – Container for weight sharing between cells. Created if None.</li> |
| </ul> |
| </td> |
| </tr> |
| </tbody> |
| </table> |
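<p>A minimal usage sketch with assumed shapes for the ‘NCHW’ layout:</p>
<div class="highlight-python"><div class="highlight"><pre>import mxnet as mx
from mxnet.gluon.contrib import rnn

# 'NCHW' layout: each step is (batch, channels, height, width); values are illustrative.
cell = rnn.Conv2DRNNCell(input_shape=(3, 16, 16), hidden_channels=8,
                         i2h_kernel=(3, 3), h2h_kernel=(3, 3), i2h_pad=(1, 1))
cell.initialize()
x = mx.nd.ones((4, 3, 16, 16))               # one time step for a batch of 4
output, states = cell(x, cell.begin_state(batch_size=4))
print(output.shape)                          # (4, 8, 16, 16) with the padding above
</pre></div>
</div>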
| </dd></dl> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv3DRNNCell"> |
| <em class="property">class </em><code class="descclassname">mxnet.gluon.contrib.rnn.</code><code class="descname">Conv3DRNNCell</code><span class="sig-paren">(</span><em>input_shape</em>, <em>hidden_channels</em>, <em>i2h_kernel</em>, <em>h2h_kernel</em>, <em>i2h_pad=(0</em>, <em>0</em>, <em>0)</em>, <em>i2h_dilate=(1</em>, <em>1</em>, <em>1)</em>, <em>h2h_dilate=(1</em>, <em>1</em>, <em>1)</em>, <em>i2h_weight_initializer=None</em>, <em>h2h_weight_initializer=None</em>, <em>i2h_bias_initializer='zeros'</em>, <em>h2h_bias_initializer='zeros'</em>, <em>conv_layout='NCDHW'</em>, <em>activation='tanh'</em>, <em>prefix=None</em>, <em>params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv3DRNNCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv3DRNNCell" title="Permalink to this definition">¶</a></dt> |
<dd><p>3D Convolutional RNN cell.</p>
| <div class="math"> |
| \[h_t = tanh(W_i \ast x_t + R_i \ast h_{t-1} + b_i)\]</div> |
| <table class="docutils field-list" frame="void" rules="none"> |
| <col class="field-name"/> |
| <col class="field-body"/> |
| <tbody valign="top"> |
| <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple"> |
| <li><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding dimension of the batch size |
| and sequence length. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCDHW’ the shape should be (C, D, H, W).</li> |
| <li><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</li> |
| <li><strong>i2h_kernel</strong> (<em>int or tuple of int</em>) – Input convolution kernel sizes.</li> |
| <li><strong>h2h_kernel</strong> (<em>int or tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</li> |
| <li><strong>i2h_pad</strong> (<em>int or tuple of int, default (0, 0, 0)</em>) – Pad for input convolution.</li> |
| <li><strong>i2h_dilate</strong> (<em>int or tuple of int, default (1, 1, 1)</em>) – Input convolution dilate.</li> |
| <li><strong>h2h_dilate</strong> (<em>int or tuple of int, default (1, 1, 1)</em>) – Recurrent convolution dilate.</li> |
| <li><strong>i2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the input weights matrix, used for the input convolutions.</li> |
<li><strong>h2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</li>
| <li><strong>i2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the input convolution bias vectors.</li> |
| <li><strong>h2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the recurrent convolution bias vectors.</li> |
| <li><strong>conv_layout</strong> (<em>str, default 'NCDHW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCDHW’ and ‘NDHWC’.</li> |
| <li><strong>activation</strong> (<em>str or Block, default 'tanh'</em>) – Type of activation function. |
| If argument type is string, it’s equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks such as nn.LeakyReLU can be used.</li> |
<li><strong>prefix</strong> (<em>str, default 'conv_rnn_'</em>) – Prefix for name of layers (and name of weight if params is None).</li>
| <li><strong>params</strong> (<em>RNNParams, default None</em>) – Container for weight sharing between cells. Created if None.</li> |
| </ul> |
| </td> |
| </tr> |
| </tbody> |
| </table> |
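<p>A minimal usage sketch with assumed shapes for the ‘NCDHW’ layout:</p>
<div class="highlight-python"><div class="highlight"><pre>import mxnet as mx
from mxnet.gluon.contrib import rnn

# 'NCDHW' layout: each step is (batch, channels, depth, height, width); values are illustrative.
cell = rnn.Conv3DRNNCell(input_shape=(1, 5, 16, 16), hidden_channels=4,
                         i2h_kernel=(3, 3, 3), h2h_kernel=(3, 3, 3), i2h_pad=(1, 1, 1))
cell.initialize()
x = mx.nd.ones((2, 1, 5, 16, 16))            # one time step for a batch of 2
output, states = cell(x, cell.begin_state(batch_size=2))
</pre></div>
</div>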
| </dd></dl> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv1DLSTMCell"> |
| <em class="property">class </em><code class="descclassname">mxnet.gluon.contrib.rnn.</code><code class="descname">Conv1DLSTMCell</code><span class="sig-paren">(</span><em>input_shape</em>, <em>hidden_channels</em>, <em>i2h_kernel</em>, <em>h2h_kernel</em>, <em>i2h_pad=(0</em>, <em>)</em>, <em>i2h_dilate=(1</em>, <em>)</em>, <em>h2h_dilate=(1</em>, <em>)</em>, <em>i2h_weight_initializer=None</em>, <em>h2h_weight_initializer=None</em>, <em>i2h_bias_initializer='zeros'</em>, <em>h2h_bias_initializer='zeros'</em>, <em>conv_layout='NCW'</em>, <em>activation='tanh'</em>, <em>prefix=None</em>, <em>params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv1DLSTMCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv1DLSTMCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>1D Convolutional LSTM network cell.</p> |
| <p><a class="reference external" href="https://arxiv.org/abs/1506.04214">“Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting”</a> paper. Xingjian et al. NIPS2015</p> |
| <div class="math"> |
| \[\begin{split}\begin{array}{ll} |
| i_t = \sigma(W_i \ast x_t + R_i \ast h_{t-1} + b_i) \\ |
| f_t = \sigma(W_f \ast x_t + R_f \ast h_{t-1} + b_f) \\ |
| o_t = \sigma(W_o \ast x_t + R_o \ast h_{t-1} + b_o) \\ |
| c^\prime_t = tanh(W_c \ast x_t + R_c \ast h_{t-1} + b_c) \\ |
| c_t = f_t \circ c_{t-1} + i_t \circ c^\prime_t \\ |
| h_t = o_t \circ tanh(c_t) \\ |
| \end{array}\end{split}\]</div> |
| <table class="docutils field-list" frame="void" rules="none"> |
| <col class="field-name"/> |
| <col class="field-body"/> |
| <tbody valign="top"> |
| <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple"> |
| <li><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding dimension of the batch size |
| and sequence length. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCW’ the shape should be (C, W).</li> |
| <li><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</li> |
| <li><strong>i2h_kernel</strong> (<em>int or tuple of int</em>) – Input convolution kernel sizes.</li> |
| <li><strong>h2h_kernel</strong> (<em>int or tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</li> |
| <li><strong>i2h_pad</strong> (<em>int or tuple of int, default (0,)</em>) – Pad for input convolution.</li> |
| <li><strong>i2h_dilate</strong> (<em>int or tuple of int, default (1,)</em>) – Input convolution dilate.</li> |
| <li><strong>h2h_dilate</strong> (<em>int or tuple of int, default (1,)</em>) – Recurrent convolution dilate.</li> |
| <li><strong>i2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the input weights matrix, used for the input convolutions.</li> |
<li><strong>h2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</li>
| <li><strong>i2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the input convolution bias vectors.</li> |
| <li><strong>h2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the recurrent convolution bias vectors.</li> |
| <li><strong>conv_layout</strong> (<em>str, default 'NCW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCW’ and ‘NWC’.</li> |
| <li><strong>activation</strong> (<em>str or Block, default 'tanh'</em>) – Type of activation function used in c^prime_t. |
| If argument type is string, it’s equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks such as nn.LeakyReLU can be used.</li> |
<li><strong>prefix</strong> (<em>str, default 'conv_lstm_'</em>) – Prefix for name of layers (and name of weight if params is None).</li>
| <li><strong>params</strong> (<em>RNNParams, default None</em>) – Container for weight sharing between cells. Created if None.</li> |
| </ul> |
| </td> |
| </tr> |
| </tbody> |
| </table> |
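<p>A minimal usage sketch; the shapes are illustrative, with the input padding chosen so that the output width matches the input width:</p>
<div class="highlight-python"><div class="highlight"><pre>import mxnet as mx
from mxnet.gluon.contrib import rnn

# Illustrative shapes; i2h_pad=(1,) keeps the output width equal to the input width.
cell = rnn.Conv1DLSTMCell(input_shape=(3, 30), hidden_channels=10,
                          i2h_kernel=(3,), h2h_kernel=(3,), i2h_pad=(1,))
cell.initialize()
x = mx.nd.random.uniform(shape=(4, 3, 30))
states = cell.begin_state(batch_size=4)      # LSTM cells carry two states: [h, c]
output, states = cell(x, states)             # output shape: (4, 10, 30)
</pre></div>
</div>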
| </dd></dl> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv2DLSTMCell"> |
| <em class="property">class </em><code class="descclassname">mxnet.gluon.contrib.rnn.</code><code class="descname">Conv2DLSTMCell</code><span class="sig-paren">(</span><em>input_shape</em>, <em>hidden_channels</em>, <em>i2h_kernel</em>, <em>h2h_kernel</em>, <em>i2h_pad=(0</em>, <em>0)</em>, <em>i2h_dilate=(1</em>, <em>1)</em>, <em>h2h_dilate=(1</em>, <em>1)</em>, <em>i2h_weight_initializer=None</em>, <em>h2h_weight_initializer=None</em>, <em>i2h_bias_initializer='zeros'</em>, <em>h2h_bias_initializer='zeros'</em>, <em>conv_layout='NCHW'</em>, <em>activation='tanh'</em>, <em>prefix=None</em>, <em>params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv2DLSTMCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv2DLSTMCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>2D Convolutional LSTM network cell.</p> |
| <p><a class="reference external" href="https://arxiv.org/abs/1506.04214">“Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting”</a> paper. Xingjian et al. NIPS2015</p> |
| <div class="math"> |
| \[\begin{split}\begin{array}{ll} |
| i_t = \sigma(W_i \ast x_t + R_i \ast h_{t-1} + b_i) \\ |
| f_t = \sigma(W_f \ast x_t + R_f \ast h_{t-1} + b_f) \\ |
| o_t = \sigma(W_o \ast x_t + R_o \ast h_{t-1} + b_o) \\ |
| c^\prime_t = tanh(W_c \ast x_t + R_c \ast h_{t-1} + b_c) \\ |
| c_t = f_t \circ c_{t-1} + i_t \circ c^\prime_t \\ |
| h_t = o_t \circ tanh(c_t) \\ |
| \end{array}\end{split}\]</div> |
| <table class="docutils field-list" frame="void" rules="none"> |
| <col class="field-name"/> |
| <col class="field-body"/> |
| <tbody valign="top"> |
| <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple"> |
| <li><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding dimension of the batch size |
| and sequence length. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCHW’ the shape should be (C, H, W).</li> |
| <li><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</li> |
| <li><strong>i2h_kernel</strong> (<em>int or tuple of int</em>) – Input convolution kernel sizes.</li> |
| <li><strong>h2h_kernel</strong> (<em>int or tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</li> |
| <li><strong>i2h_pad</strong> (<em>int or tuple of int, default (0, 0)</em>) – Pad for input convolution.</li> |
| <li><strong>i2h_dilate</strong> (<em>int or tuple of int, default (1, 1)</em>) – Input convolution dilate.</li> |
| <li><strong>h2h_dilate</strong> (<em>int or tuple of int, default (1, 1)</em>) – Recurrent convolution dilate.</li> |
| <li><strong>i2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the input weights matrix, used for the input convolutions.</li> |
<li><strong>h2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</li>
| <li><strong>i2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the input convolution bias vectors.</li> |
| <li><strong>h2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the recurrent convolution bias vectors.</li> |
| <li><strong>conv_layout</strong> (<em>str, default 'NCHW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCHW’ and ‘NHWC’.</li> |
| <li><strong>activation</strong> (<em>str or Block, default 'tanh'</em>) – Type of activation function used in c^prime_t. |
| If argument type is string, it’s equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks such as nn.LeakyReLU can be used.</li> |
<li><strong>prefix</strong> (<em>str, default 'conv_lstm_'</em>) – Prefix for name of layers (and name of weight if params is None).</li>
| <li><strong>params</strong> (<em>RNNParams, default None</em>) – Container for weight sharing between cells. Created if None.</li> |
| </ul> |
| </td> |
| </tr> |
| </tbody> |
| </table> |
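<p>A minimal sketch of stepping the cell over a short sequence; the sequence length and shapes are illustrative assumptions:</p>
<div class="highlight-python"><div class="highlight"><pre>import mxnet as mx
from mxnet.gluon.contrib import rnn

# Illustrative sketch: run the cell step by step over a short sequence.
cell = rnn.Conv2DLSTMCell(input_shape=(3, 16, 16), hidden_channels=8,
                          i2h_kernel=(3, 3), h2h_kernel=(3, 3), i2h_pad=(1, 1))
cell.initialize()
seq = mx.nd.random.uniform(shape=(5, 4, 3, 16, 16))   # (seq_len, batch, C, H, W)
states = cell.begin_state(batch_size=4)
outputs = []
for t in range(5):
    out, states = cell(seq[t], states)
    outputs.append(out)                                # each out: (4, 8, 16, 16)
</pre></div>
</div>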
| </dd></dl> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv3DLSTMCell"> |
| <em class="property">class </em><code class="descclassname">mxnet.gluon.contrib.rnn.</code><code class="descname">Conv3DLSTMCell</code><span class="sig-paren">(</span><em>input_shape</em>, <em>hidden_channels</em>, <em>i2h_kernel</em>, <em>h2h_kernel</em>, <em>i2h_pad=(0</em>, <em>0</em>, <em>0)</em>, <em>i2h_dilate=(1</em>, <em>1</em>, <em>1)</em>, <em>h2h_dilate=(1</em>, <em>1</em>, <em>1)</em>, <em>i2h_weight_initializer=None</em>, <em>h2h_weight_initializer=None</em>, <em>i2h_bias_initializer='zeros'</em>, <em>h2h_bias_initializer='zeros'</em>, <em>conv_layout='NCDHW'</em>, <em>activation='tanh'</em>, <em>prefix=None</em>, <em>params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv3DLSTMCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv3DLSTMCell" title="Permalink to this definition">¶</a></dt> |
| <dd><p>3D Convolutional LSTM network cell.</p> |
| <p><a class="reference external" href="https://arxiv.org/abs/1506.04214">“Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting”</a> paper. Xingjian et al. NIPS2015</p> |
| <div class="math"> |
| \[\begin{split}\begin{array}{ll} |
| i_t = \sigma(W_i \ast x_t + R_i \ast h_{t-1} + b_i) \\ |
| f_t = \sigma(W_f \ast x_t + R_f \ast h_{t-1} + b_f) \\ |
| o_t = \sigma(W_o \ast x_t + R_o \ast h_{t-1} + b_o) \\ |
| c^\prime_t = tanh(W_c \ast x_t + R_c \ast h_{t-1} + b_c) \\ |
| c_t = f_t \circ c_{t-1} + i_t \circ c^\prime_t \\ |
| h_t = o_t \circ tanh(c_t) \\ |
| \end{array}\end{split}\]</div> |
| <table class="docutils field-list" frame="void" rules="none"> |
| <col class="field-name"/> |
| <col class="field-body"/> |
| <tbody valign="top"> |
| <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple"> |
| <li><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding dimension of the batch size |
| and sequence length. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCDHW’ the shape should be (C, D, H, W).</li> |
| <li><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</li> |
| <li><strong>i2h_kernel</strong> (<em>int or tuple of int</em>) – Input convolution kernel sizes.</li> |
| <li><strong>h2h_kernel</strong> (<em>int or tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</li> |
| <li><strong>i2h_pad</strong> (<em>int or tuple of int, default (0, 0, 0)</em>) – Pad for input convolution.</li> |
| <li><strong>i2h_dilate</strong> (<em>int or tuple of int, default (1, 1, 1)</em>) – Input convolution dilate.</li> |
| <li><strong>h2h_dilate</strong> (<em>int or tuple of int, default (1, 1, 1)</em>) – Recurrent convolution dilate.</li> |
| <li><strong>i2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the input weights matrix, used for the input convolutions.</li> |
<li><strong>h2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</li>
| <li><strong>i2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the input convolution bias vectors.</li> |
| <li><strong>h2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the recurrent convolution bias vectors.</li> |
| <li><strong>conv_layout</strong> (<em>str, default 'NCDHW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCDHW’ and ‘NDHWC’.</li> |
| <li><strong>activation</strong> (<em>str or Block, default 'tanh'</em>) – Type of activation function used in c^prime_t. |
| If argument type is string, it’s equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks such as nn.LeakyReLU can be used.</li> |
<li><strong>prefix</strong> (<em>str, default 'conv_lstm_'</em>) – Prefix for name of layers (and name of weight if params is None).</li>
| <li><strong>params</strong> (<em>RNNParams, default None</em>) – Container for weight sharing between cells. Created if None.</li> |
| </ul> |
| </td> |
| </tr> |
| </tbody> |
| </table> |
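<p>A minimal usage sketch with assumed volumetric input shapes:</p>
<div class="highlight-python"><div class="highlight"><pre>import mxnet as mx
from mxnet.gluon.contrib import rnn

# Illustrative shapes for volumetric inputs in 'NCDHW' layout.
cell = rnn.Conv3DLSTMCell(input_shape=(1, 5, 16, 16), hidden_channels=4,
                          i2h_kernel=(3, 3, 3), h2h_kernel=(3, 3, 3), i2h_pad=(1, 1, 1))
cell.initialize()
x = mx.nd.ones((2, 1, 5, 16, 16))
h, c = cell.begin_state(batch_size=2)        # hidden state and cell state
output, states = cell(x, [h, c])
</pre></div>
</div>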
| </dd></dl> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv1DGRUCell"> |
| <em class="property">class </em><code class="descclassname">mxnet.gluon.contrib.rnn.</code><code class="descname">Conv1DGRUCell</code><span class="sig-paren">(</span><em>input_shape</em>, <em>hidden_channels</em>, <em>i2h_kernel</em>, <em>h2h_kernel</em>, <em>i2h_pad=(0</em>, <em>)</em>, <em>i2h_dilate=(1</em>, <em>)</em>, <em>h2h_dilate=(1</em>, <em>)</em>, <em>i2h_weight_initializer=None</em>, <em>h2h_weight_initializer=None</em>, <em>i2h_bias_initializer='zeros'</em>, <em>h2h_bias_initializer='zeros'</em>, <em>conv_layout='NCW'</em>, <em>activation='tanh'</em>, <em>prefix=None</em>, <em>params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv1DGRUCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv1DGRUCell" title="Permalink to this definition">¶</a></dt> |
<dd><p>1D Convolutional Gated Recurrent Unit (GRU) network cell.</p>
| <div class="math"> |
| \[\begin{split}\begin{array}{ll} |
| r_t = \sigma(W_r \ast x_t + R_r \ast h_{t-1} + b_r) \\ |
| z_t = \sigma(W_z \ast x_t + R_z \ast h_{t-1} + b_z) \\ |
| n_t = tanh(W_i \ast x_t + b_i + r_t \circ (R_n \ast h_{t-1} + b_n)) \\ |
h^\prime_t = (1 - z_t) \circ n_t + z_t \circ h_{t-1} \\
| \end{array}\end{split}\]</div> |
| <table class="docutils field-list" frame="void" rules="none"> |
| <col class="field-name"/> |
| <col class="field-body"/> |
| <tbody valign="top"> |
| <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple"> |
| <li><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding dimension of the batch size |
| and sequence length. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCW’ the shape should be (C, W).</li> |
| <li><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</li> |
| <li><strong>i2h_kernel</strong> (<em>int or tuple of int</em>) – Input convolution kernel sizes.</li> |
| <li><strong>h2h_kernel</strong> (<em>int or tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</li> |
| <li><strong>i2h_pad</strong> (<em>int or tuple of int, default (0,)</em>) – Pad for input convolution.</li> |
| <li><strong>i2h_dilate</strong> (<em>int or tuple of int, default (1,)</em>) – Input convolution dilate.</li> |
| <li><strong>h2h_dilate</strong> (<em>int or tuple of int, default (1,)</em>) – Recurrent convolution dilate.</li> |
| <li><strong>i2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the input weights matrix, used for the input convolutions.</li> |
<li><strong>h2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</li>
| <li><strong>i2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the input convolution bias vectors.</li> |
| <li><strong>h2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the recurrent convolution bias vectors.</li> |
| <li><strong>conv_layout</strong> (<em>str, default 'NCW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCW’ and ‘NWC’.</li> |
| <li><strong>activation</strong> (<em>str or Block, default 'tanh'</em>) – Type of activation function used in n_t. |
| If argument type is string, it’s equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks such as nn.LeakyReLU can be used.</li> |
<li><strong>prefix</strong> (<em>str, default 'conv_gru_'</em>) – Prefix for name of layers (and name of weight if params is None).</li>
| <li><strong>params</strong> (<em>RNNParams, default None</em>) – Container for weight sharing between cells. Created if None.</li> |
| </ul> |
| </td> |
| </tr> |
| </tbody> |
| </table> |
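<p>A minimal usage sketch with illustrative shapes; note that GRU cells carry a single recurrent state:</p>
<div class="highlight-python"><div class="highlight"><pre>import mxnet as mx
from mxnet.gluon.contrib import rnn

# Illustrative shapes; padding keeps the output width equal to the input width.
cell = rnn.Conv1DGRUCell(input_shape=(3, 30), hidden_channels=10,
                         i2h_kernel=(3,), h2h_kernel=(3,), i2h_pad=(1,))
cell.initialize()
x = mx.nd.random.uniform(shape=(4, 3, 30))
output, states = cell(x, cell.begin_state(batch_size=4))
print(len(states))                           # 1: the hidden state only
</pre></div>
</div>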
| </dd></dl> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv2DGRUCell"> |
| <em class="property">class </em><code class="descclassname">mxnet.gluon.contrib.rnn.</code><code class="descname">Conv2DGRUCell</code><span class="sig-paren">(</span><em>input_shape</em>, <em>hidden_channels</em>, <em>i2h_kernel</em>, <em>h2h_kernel</em>, <em>i2h_pad=(0</em>, <em>0)</em>, <em>i2h_dilate=(1</em>, <em>1)</em>, <em>h2h_dilate=(1</em>, <em>1)</em>, <em>i2h_weight_initializer=None</em>, <em>h2h_weight_initializer=None</em>, <em>i2h_bias_initializer='zeros'</em>, <em>h2h_bias_initializer='zeros'</em>, <em>conv_layout='NCHW'</em>, <em>activation='tanh'</em>, <em>prefix=None</em>, <em>params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv2DGRUCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv2DGRUCell" title="Permalink to this definition">¶</a></dt> |
<dd><p>2D Convolutional Gated Recurrent Unit (GRU) network cell.</p>
| <div class="math"> |
| \[\begin{split}\begin{array}{ll} |
| r_t = \sigma(W_r \ast x_t + R_r \ast h_{t-1} + b_r) \\ |
| z_t = \sigma(W_z \ast x_t + R_z \ast h_{t-1} + b_z) \\ |
| n_t = tanh(W_i \ast x_t + b_i + r_t \circ (R_n \ast h_{t-1} + b_n)) \\ |
h^\prime_t = (1 - z_t) \circ n_t + z_t \circ h_{t-1} \\
| \end{array}\end{split}\]</div> |
| <table class="docutils field-list" frame="void" rules="none"> |
| <col class="field-name"/> |
| <col class="field-body"/> |
| <tbody valign="top"> |
| <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple"> |
| <li><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding dimension of the batch size |
| and sequence length. Must be consistent with <cite>conv_layout</cite>. |
| For example, for layout ‘NCHW’ the shape should be (C, H, W).</li> |
| <li><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</li> |
| <li><strong>i2h_kernel</strong> (<em>int or tuple of int</em>) – Input convolution kernel sizes.</li> |
| <li><strong>h2h_kernel</strong> (<em>int or tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</li> |
| <li><strong>i2h_pad</strong> (<em>int or tuple of int, default (0, 0)</em>) – Pad for input convolution.</li> |
| <li><strong>i2h_dilate</strong> (<em>int or tuple of int, default (1, 1)</em>) – Input convolution dilate.</li> |
| <li><strong>h2h_dilate</strong> (<em>int or tuple of int, default (1, 1)</em>) – Recurrent convolution dilate.</li> |
| <li><strong>i2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the input weights matrix, used for the input convolutions.</li> |
<li><strong>h2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</li>
| <li><strong>i2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the input convolution bias vectors.</li> |
| <li><strong>h2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the recurrent convolution bias vectors.</li> |
| <li><strong>conv_layout</strong> (<em>str, default 'NCHW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCHW’ and ‘NHWC’.</li> |
| <li><strong>activation</strong> (<em>str or Block, default 'tanh'</em>) – Type of activation function used in n_t. |
| If argument type is string, it’s equivalent to nn.Activation(act_type=str). See |
| <a class="reference internal" href="../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal"><span class="pre">Activation()</span></code></a> for available choices. |
| Alternatively, other activation blocks such as nn.LeakyReLU can be used.</li> |
<li><strong>prefix</strong> (<em>str, default 'conv_gru_'</em>) – Prefix for name of layers (and name of weight if params is None).</li>
| <li><strong>params</strong> (<em>RNNParams, default None</em>) – Container for weight sharing between cells. Created if None.</li> |
| </ul> |
| </td> |
| </tr> |
| </tbody> |
| </table> |
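<p>A minimal usage sketch, not part of the original reference: the input shape, kernel sizes, and padding below are illustrative assumptions. It constructs a <code class="docutils literal"><span class="pre">Conv2DGRUCell</span></code> and steps it once on NCHW input.</p>
<div class="highlight-python"><div class="highlight"><pre>from mxnet import nd
from mxnet.gluon.contrib.rnn import Conv2DGRUCell

# Hypothetical shapes: 3-channel 16x16 frames, 8 hidden channels.
cell = Conv2DGRUCell(input_shape=(3, 16, 16), hidden_channels=8,
                     i2h_kernel=(3, 3), h2h_kernel=(3, 3), i2h_pad=(1, 1))
cell.initialize()

x = nd.random.uniform(shape=(4, 3, 16, 16))   # one time step, layout (N, C, H, W)
states = cell.begin_state(batch_size=4)       # single hidden state, shape (4, 8, 16, 16)
output, states = cell(x, states)              # output has the same shape as the hidden state
</pre></div></div>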
| </dd></dl> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.Conv3DGRUCell"> |
| <em class="property">class </em><code class="descclassname">mxnet.gluon.contrib.rnn.</code><code class="descname">Conv3DGRUCell</code><span class="sig-paren">(</span><em>input_shape</em>, <em>hidden_channels</em>, <em>i2h_kernel</em>, <em>h2h_kernel</em>, <em>i2h_pad=(0</em>, <em>0</em>, <em>0)</em>, <em>i2h_dilate=(1</em>, <em>1</em>, <em>1)</em>, <em>h2h_dilate=(1</em>, <em>1</em>, <em>1)</em>, <em>i2h_weight_initializer=None</em>, <em>h2h_weight_initializer=None</em>, <em>i2h_bias_initializer='zeros'</em>, <em>h2h_bias_initializer='zeros'</em>, <em>conv_layout='NCDHW'</em>, <em>activation='tanh'</em>, <em>prefix=None</em>, <em>params=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/conv_rnn_cell.html#Conv3DGRUCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.Conv3DGRUCell" title="Permalink to this definition">¶</a></dt> |
<dd><p>3D Convolutional Gated Recurrent Unit (GRU) network cell.</p>
| <div class="math"> |
\[\begin{split}\begin{array}{ll}
r_t = \sigma(W_r \ast x_t + R_r \ast h_{t-1} + b_r) \\
z_t = \sigma(W_z \ast x_t + R_z \ast h_{t-1} + b_z) \\
n_t = \tanh(W_i \ast x_t + b_i + r_t \circ (R_n \ast h_{t-1} + b_n)) \\
h^\prime_t = (1 - z_t) \circ n_t + z_t \circ h_{t-1} \\
\end{array}\end{split}\]</div>
| <table class="docutils field-list" frame="void" rules="none"> |
| <col class="field-name"/> |
| <col class="field-body"/> |
| <tbody valign="top"> |
| <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple"> |
<li><strong>input_shape</strong> (<em>tuple of int</em>) – Input tensor shape at each time step for each sample, excluding the batch size
and sequence length dimensions. Must be consistent with <cite>conv_layout</cite>.
For example, for layout ‘NCDHW’ the shape should be (C, D, H, W).</li>
| <li><strong>hidden_channels</strong> (<em>int</em>) – Number of output channels.</li> |
| <li><strong>i2h_kernel</strong> (<em>int or tuple of int</em>) – Input convolution kernel sizes.</li> |
| <li><strong>h2h_kernel</strong> (<em>int or tuple of int</em>) – Recurrent convolution kernel sizes. Only odd-numbered sizes are supported.</li> |
| <li><strong>i2h_pad</strong> (<em>int or tuple of int, default (0, 0, 0)</em>) – Pad for input convolution.</li> |
| <li><strong>i2h_dilate</strong> (<em>int or tuple of int, default (1, 1, 1)</em>) – Input convolution dilate.</li> |
| <li><strong>h2h_dilate</strong> (<em>int or tuple of int, default (1, 1, 1)</em>) – Recurrent convolution dilate.</li> |
| <li><strong>i2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the input weights matrix, used for the input convolutions.</li> |
<li><strong>h2h_weight_initializer</strong> (<em>str or Initializer</em>) – Initializer for the recurrent weights matrix, used for the recurrent convolutions.</li>
| <li><strong>i2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the input convolution bias vectors.</li> |
| <li><strong>h2h_bias_initializer</strong> (<em>str or Initializer, default zeros</em>) – Initializer for the recurrent convolution bias vectors.</li> |
| <li><strong>conv_layout</strong> (<em>str, default 'NCDHW'</em>) – Layout for all convolution inputs, outputs and weights. Options are ‘NCDHW’ and ‘NDHWC’.</li> |
<li><strong>activation</strong> (<em>str or Block, default 'tanh'</em>) – Type of activation function used in n_t.
If the argument is a string, it is equivalent to nn.Activation(act_type=str). See
<a class="reference internal" href="../ndarray/ndarray.html#mxnet.ndarray.Activation" title="mxnet.ndarray.Activation"><code class="xref py py-func docutils literal"><span class="pre">Activation()</span></code></a> for available choices.
Alternatively, other activation blocks such as nn.LeakyReLU can be used.</li>
<li><strong>prefix</strong> (<em>str, default 'conv_gru_'</em>) – Prefix for the names of the layers (and of the weights if params is None).</li>
| <li><strong>params</strong> (<em>RNNParams, default None</em>) – Container for weight sharing between cells. Created if None.</li> |
| </ul> |
| </td> |
| </tr> |
| </tbody> |
| </table> |
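<p>A minimal usage sketch for the 3D variant; the shapes and kernel sizes are illustrative assumptions, not taken from the original reference. It performs one step of a <code class="docutils literal"><span class="pre">Conv3DGRUCell</span></code> on NCDHW input.</p>
<div class="highlight-python"><div class="highlight"><pre>from mxnet import nd
from mxnet.gluon.contrib.rnn import Conv3DGRUCell

# Hypothetical shapes: 1-channel 8x16x16 volumes, 4 hidden channels.
cell = Conv3DGRUCell(input_shape=(1, 8, 16, 16), hidden_channels=4,
                     i2h_kernel=(3, 3, 3), h2h_kernel=(3, 3, 3), i2h_pad=(1, 1, 1))
cell.initialize()

x = nd.random.uniform(shape=(2, 1, 8, 16, 16))   # one time step, layout (N, C, D, H, W)
states = cell.begin_state(batch_size=2)          # single hidden state, shape (2, 4, 8, 16, 16)
output, states = cell(x, states)
</pre></div></div>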
| </dd></dl> |
| <dl class="class"> |
| <dt id="mxnet.gluon.contrib.rnn.VariationalDropoutCell"> |
| <em class="property">class </em><code class="descclassname">mxnet.gluon.contrib.rnn.</code><code class="descname">VariationalDropoutCell</code><span class="sig-paren">(</span><em>base_cell</em>, <em>drop_inputs=0.0</em>, <em>drop_states=0.0</em>, <em>drop_outputs=0.0</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/rnn_cell.html#VariationalDropoutCell"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.VariationalDropoutCell" title="Permalink to this definition">¶</a></dt> |
<dd><p>Applies variational dropout to the base cell
(<a class="reference external" href="https://arxiv.org/pdf/1512.05287.pdf">https://arxiv.org/pdf/1512.05287.pdf</a>,
<a class="reference external" href="https://www.stat.berkeley.edu/~tsmoon/files/Conference/asru2015.pdf">https://www.stat.berkeley.edu/~tsmoon/files/Conference/asru2015.pdf</a>).</p>
<p>Variational dropout uses the same dropout mask across time steps. It can be applied to RNN
inputs, outputs, and states, with a separate mask for each.</p>
<p>The dropout masks are sampled on the first forward step and remain fixed until <cite>.reset()</cite> is
called. Therefore, when stepping the cell manually rather than calling <cite>.unroll()</cite>, call
<cite>.reset()</cite> after each sequence.</p>
| <table class="docutils field-list" frame="void" rules="none"> |
| <col class="field-name"/> |
| <col class="field-body"/> |
| <tbody valign="top"> |
| <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first last simple"> |
| <li><strong>base_cell</strong> (<a class="reference internal" href="rnn.html#mxnet.gluon.rnn.RecurrentCell" title="mxnet.gluon.rnn.RecurrentCell"><em>RecurrentCell</em></a>) – The cell on which to perform variational dropout.</li> |
| <li><strong>drop_inputs</strong> (<em>float, default 0.</em>) – The dropout rate for inputs. Won’t apply dropout if it equals 0.</li> |
| <li><strong>drop_states</strong> (<em>float, default 0.</em>) – The dropout rate for state inputs on the first state channel. |
| Won’t apply dropout if it equals 0.</li> |
| <li><strong>drop_outputs</strong> (<em>float, default 0.</em>) – The dropout rate for outputs. Won’t apply dropout if it equals 0.</li> |
| </ul> |
| </td> |
| </tr> |
| </tbody> |
| </table> |
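<p>A minimal usage sketch; the base cell, dropout rates, and shapes below are illustrative assumptions rather than part of the original reference. It wraps an <code class="docutils literal"><span class="pre">LSTMCell</span></code> in a <code class="docutils literal"><span class="pre">VariationalDropoutCell</span></code> and unrolls it over a short sequence.</p>
<div class="highlight-python"><div class="highlight"><pre>import mxnet as mx
from mxnet import nd
from mxnet.gluon import rnn
from mxnet.gluon.contrib.rnn import VariationalDropoutCell

base = rnn.LSTMCell(hidden_size=20)
cell = VariationalDropoutCell(base, drop_inputs=0.2, drop_states=0.2, drop_outputs=0.2)
cell.initialize()

x = nd.random.uniform(shape=(4, 10, 8))   # (batch, time, features) for layout 'NTC'

# Dropout is only active in training mode, e.g. inside autograd.record().
with mx.autograd.record():
    outputs, states = cell.unroll(length=10, inputs=x, layout='NTC', merge_outputs=True)

# outputs: shape (4, 10, 20); states: the LSTM states after the last step.
# When stepping the cell manually instead of calling unroll(), call
# cell.reset() after each sequence so that new dropout masks are sampled.
</pre></div></div>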
| <dl class="method"> |
| <dt id="mxnet.gluon.contrib.rnn.VariationalDropoutCell.unroll"> |
| <code class="descname">unroll</code><span class="sig-paren">(</span><em>length</em>, <em>inputs</em>, <em>begin_state=None</em>, <em>layout='NTC'</em>, <em>merge_outputs=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/contrib/rnn/rnn_cell.html#VariationalDropoutCell.unroll"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.contrib.rnn.VariationalDropoutCell.unroll" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Unrolls an RNN cell across time steps.</p> |
| <table class="docutils field-list" frame="void" rules="none"> |
| <col class="field-name"/> |
| <col class="field-body"/> |
| <tbody valign="top"> |
| <tr class="field-odd field"><th class="field-name">Parameters:</th><td class="field-body"><ul class="first simple"> |
| <li><strong>length</strong> (<em>int</em>) – Number of steps to unroll.</li> |
| <li><strong>inputs</strong> (<em>Symbol, list of Symbol, or None</em>) – <p>If <cite>inputs</cite> is a single Symbol (usually the output |
| of Embedding symbol), it should have shape |
| (batch_size, length, ...) if <cite>layout</cite> is ‘NTC’, |
| or (length, batch_size, ...) if <cite>layout</cite> is ‘TNC’.</p> |
| <p>If <cite>inputs</cite> is a list of symbols (usually output of |
| previous unroll), they should all have shape |
| (batch_size, ...).</p> |
| </li> |
| <li><strong>begin_state</strong> (<em>nested list of Symbol, optional</em>) – Input states created by <cite>begin_state()</cite> |
| or output state of another cell. |
| Created from <cite>begin_state()</cite> if <cite>None</cite>.</li> |
| <li><strong>layout</strong> (<em>str, optional</em>) – <cite>layout</cite> of input symbol. Only used if inputs |
| is a single Symbol.</li> |
| <li><strong>merge_outputs</strong> (<em>bool, optional</em>) – If <cite>False</cite>, returns outputs as a list of Symbols. |
| If <cite>True</cite>, concatenates output across time steps |
| and returns a single symbol with shape |
| (batch_size, length, ...) if layout is ‘NTC’, |
| or (length, batch_size, ...) if layout is ‘TNC’. |
If <cite>None</cite>, returns whichever form is faster.</li>
| </ul> |
| </td> |
| </tr> |
| <tr class="field-even field"><th class="field-name">Returns:</th><td class="field-body"><p class="first last"><ul class="simple"> |
| <li><strong>outputs</strong> (<em>list of Symbol or Symbol</em>) – |
| Symbol (if <cite>merge_outputs</cite> is True) or list of Symbols |
| (if <cite>merge_outputs</cite> is False) corresponding to the output from |
| the RNN from this unrolling.</li> |
| <li><strong>states</strong> (<em>list of Symbol</em>) – |
| The new state of this RNN after this unrolling. |
The type of this symbol is the same as the output of <cite>begin_state()</cite>.</li>
| </ul> |
| </p> |
| </td> |
| </tr> |
| </tbody> |
| </table> |
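<p>An illustrative sketch (the base cell and shapes are assumptions) of how <cite>merge_outputs</cite> changes the returned outputs when unrolling imperatively with NDArray inputs:</p>
<div class="highlight-python"><div class="highlight"><pre>import mxnet as mx
from mxnet import nd
from mxnet.gluon import rnn
from mxnet.gluon.contrib.rnn import VariationalDropoutCell

cell = VariationalDropoutCell(rnn.GRUCell(hidden_size=16), drop_inputs=0.1)
cell.initialize()
x = nd.ones((2, 5, 4))   # (batch, time, features), layout 'NTC'

with mx.autograd.record():
    out_list, states = cell.unroll(5, x, layout='NTC', merge_outputs=False)
    # out_list is a list of 5 NDArrays, each with shape (2, 16)
    out_merged, states = cell.unroll(5, x, layout='NTC', merge_outputs=True)
    # out_merged is a single NDArray with shape (2, 5, 16)
</pre></div></div>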
| </dd></dl> |
| </dd></dl> |
| <script>auto_index("api-reference");</script></div> |
| </div> |
| </div> |
| </div> |
| <div aria-label="main navigation" class="sphinxsidebar rightsidebar" role="navigation"> |
| <div class="sphinxsidebarwrapper"> |
| <h3><a href="../../../index.html">Table Of Contents</a></h3> |
| <ul> |
| <li><a class="reference internal" href="#">Gluon Contrib API</a><ul> |
| <li><a class="reference internal" href="#overview">Overview</a></li> |
| <li><a class="reference internal" href="#contrib">Contrib</a></li> |
| <li><a class="reference internal" href="#api-reference">API Reference</a></li> |
| </ul> |
| </li> |
| </ul> |
| </div> |
| </div> |
| </div><div class="footer"> |
| <div class="section-disclaimer"> |
| <div class="container"> |
| <div> |
| <img height="60" src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/apache_incubator_logo.png"/> |
| <p> |
| Apache MXNet is an effort undergoing incubation at The Apache Software Foundation (ASF), <strong>sponsored by the <i>Apache Incubator</i></strong>. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. |
| </p> |
| <p> |
| "Copyright © 2017, The Apache Software Foundation |
| Apache MXNet, MXNet, Apache, the Apache feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the Apache Software Foundation." |
| </p> |
| </div> |
| </div> |
| </div> |
| </div> <!-- pagename != index --> |
| </div> |
| <script crossorigin="anonymous" integrity="sha384-0mSbJDEHialfmuBBQP6A4Qrprq5OVfW37PRR3j5ELqxss1yVqOtnepnHVP9aJ7xS" src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script> |
| <script src="../../../_static/js/sidebar.js" type="text/javascript"></script> |
| <script src="../../../_static/js/search.js" type="text/javascript"></script> |
| <script src="../../../_static/js/navbar.js" type="text/javascript"></script> |
| <script src="../../../_static/js/clipboard.min.js" type="text/javascript"></script> |
| <script src="../../../_static/js/copycode.js" type="text/javascript"></script> |
| <script src="../../../_static/js/page.js" type="text/javascript"></script> |
| <script type="text/javascript"> |
| $('body').ready(function () { |
| $('body').css('visibility', 'visible'); |
| }); |
| </script> |
| </body> |
| </html> |