| <!DOCTYPE html> |
| |
| <html lang="en"> |
| <head> |
| <meta charset="utf-8"/> |
| <meta content="IE=edge" http-equiv="X-UA-Compatible"/> |
| <meta content="width=device-width, initial-scale=1" name="viewport"/> |
| <meta content="Symbol - Neural network graphs and auto-differentiation" property="og:title"> |
| <meta content="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/og-logo.png" property="og:image"> |
| <meta content="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/og-logo.png" property="og:image:secure_url"> |
| <meta content="Symbol - Neural network graphs and auto-differentiation" property="og:description"/> |
| <title>Symbol - Neural network graphs and auto-differentiation — mxnet documentation</title> |
| <link crossorigin="anonymous" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7" rel="stylesheet"/> |
| <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.5.0/css/font-awesome.min.css" rel="stylesheet"/> |
| <link href="../../_static/basic.css" rel="stylesheet" type="text/css"> |
| <link href="../../_static/pygments.css" rel="stylesheet" type="text/css"> |
| <link href="../../_static/mxnet.css" rel="stylesheet" type="text/css"/> |
| <script type="text/javascript"> |
| var DOCUMENTATION_OPTIONS = { |
| URL_ROOT: '../../', |
| VERSION: '', |
| COLLAPSE_INDEX: false, |
| FILE_SUFFIX: '.html', |
| HAS_SOURCE: true, |
| SOURCELINK_SUFFIX: '.txt' |
| }; |
| </script> |
| <script src="https://code.jquery.com/jquery-1.11.1.min.js" type="text/javascript"></script> |
| <script src="../../_static/underscore.js" type="text/javascript"></script> |
| <script src="../../_static/searchtools_custom.js" type="text/javascript"></script> |
| <script src="../../_static/doctools.js" type="text/javascript"></script> |
| <script src="../../_static/selectlang.js" type="text/javascript"></script> |
| <script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script> |
| <script type="text/javascript"> jQuery(function() { Search.loadIndex("/versions/1.2.1/searchindex.js"); Search.init();}); </script> |
| <script> |
| (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ |
| (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new |
| Date();a=s.createElement(o), |
| m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) |
| })(window,document,'script','https://www.google-analytics.com/analytics.js','ga'); |
| |
| ga('create', 'UA-96378503-1', 'auto'); |
| ga('send', 'pageview'); |
| |
| </script> |
| <!-- --> |
| <!-- <script type="text/javascript" src="../../_static/jquery.js"></script> --> |
| <!-- --> |
| <!-- <script type="text/javascript" src="../../_static/underscore.js"></script> --> |
| <!-- --> |
| <!-- <script type="text/javascript" src="../../_static/doctools.js"></script> --> |
| <!-- --> |
| <!-- <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> --> |
| <!-- --> |
| <link href="../../genindex.html" rel="index" title="Index"> |
| <link href="../../search.html" rel="search" title="Search"/> |
| <link href="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet-icon.png" rel="icon" type="image/png"/> |
</head>
| <body background="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet-background-compressed.jpeg" role="document"> |
| <div class="content-block"><div class="navbar navbar-fixed-top"> |
| <div class="container" id="navContainer"> |
| <div class="innder" id="header-inner"> |
| <h1 id="logo-wrap"> |
| <a href="../../" id="logo"><img src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet_logo.png"/></a> |
| </h1> |
| <nav class="nav-bar" id="main-nav"> |
| <a class="main-nav-link" href="/versions/1.2.1/install/index.html">Install</a> |
| <span id="dropdown-menu-position-anchor"> |
| <a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">Gluon <span class="caret"></span></a> |
| <ul class="dropdown-menu navbar-menu" id="package-dropdown-menu"> |
| <li><a class="main-nav-link" href="/versions/1.2.1/tutorials/gluon/gluon.html">About</a></li> |
| <li><a class="main-nav-link" href="https://www.d2l.ai/">Dive into Deep Learning</a></li> |
| <li><a class="main-nav-link" href="https://gluon-cv.mxnet.io">GluonCV Toolkit</a></li> |
| <li><a class="main-nav-link" href="https://gluon-nlp.mxnet.io/">GluonNLP Toolkit</a></li> |
| </ul> |
| </span> |
| <span id="dropdown-menu-position-anchor"> |
| <a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">API <span class="caret"></span></a> |
| <ul class="dropdown-menu navbar-menu" id="package-dropdown-menu"> |
| <li><a class="main-nav-link" href="/versions/1.2.1/api/python/index.html">Python</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/api/c++/index.html">C++</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/api/julia/index.html">Julia</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/api/perl/index.html">Perl</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/api/r/index.html">R</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/api/scala/index.html">Scala</a></li> |
| </ul> |
| </span> |
| <span id="dropdown-menu-position-anchor-docs"> |
| <a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">Docs <span class="caret"></span></a> |
| <ul class="dropdown-menu navbar-menu" id="package-dropdown-menu-docs"> |
| <li><a class="main-nav-link" href="/versions/1.2.1/faq/index.html">FAQ</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/tutorials/index.html">Tutorials</a> |
| <li><a class="main-nav-link" href="https://github.com/apache/incubator-mxnet/tree/1.2.1/example">Examples</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/architecture/index.html">Architecture</a></li> |
| <li><a class="main-nav-link" href="https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+Home">Developer Wiki</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/model_zoo/index.html">Model Zoo</a></li> |
| <li><a class="main-nav-link" href="https://github.com/onnx/onnx-mxnet">ONNX</a></li> |
</ul>
| </span> |
| <span id="dropdown-menu-position-anchor-community"> |
| <a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">Community <span class="caret"></span></a> |
| <ul class="dropdown-menu navbar-menu" id="package-dropdown-menu-community"> |
| <li><a class="main-nav-link" href="http://discuss.mxnet.io">Forum</a></li> |
| <li><a class="main-nav-link" href="https://github.com/apache/incubator-mxnet/tree/1.2.1">Github</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/community/contribute.html">Contribute</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/community/powered_by.html">Powered By</a></li> |
| </ul> |
| </span> |
| <span id="dropdown-menu-position-anchor-version" style="position: relative"><a href="#" class="main-nav-link dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="true">1.2.1<span class="caret"></span></a><ul id="package-dropdown-menu" class="dropdown-menu"><li><a href="/">master</a></li><li><a href="/versions/1.7/">1.7</a></li><li><a href=/versions/1.6/>1.6</a></li><li><a href=/versions/1.5.0/>1.5.0</a></li><li><a href=/versions/1.4.1/>1.4.1</a></li><li><a href=/versions/1.3.1/>1.3.1</a></li><li><a href=/versions/1.2.1/>1.2.1</a></li><li><a href=/versions/1.1.0/>1.1.0</a></li><li><a href=/versions/1.0.0/>1.0.0</a></li><li><a href=/versions/0.12.1/>0.12.1</a></li><li><a href=/versions/0.11.0/>0.11.0</a></li></ul></span></nav> |
| <script> function getRootPath(){ return "../../" } </script> |
| <div class="burgerIcon dropdown"> |
| <a class="dropdown-toggle" data-toggle="dropdown" href="#" role="button">☰</a> |
| <ul class="dropdown-menu" id="burgerMenu"> |
| <li><a href="/versions/1.2.1/install/index.html">Install</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/tutorials/index.html">Tutorials</a></li> |
| <li class="dropdown-submenu dropdown"> |
| <a aria-expanded="true" aria-haspopup="true" class="dropdown-toggle burger-link" data-toggle="dropdown" href="#" tabindex="-1">Gluon</a> |
| <ul class="dropdown-menu navbar-menu" id="package-dropdown-menu"> |
| <li><a class="main-nav-link" href="/versions/1.2.1/tutorials/gluon/gluon.html">About</a></li> |
| <li><a class="main-nav-link" href="http://gluon.mxnet.io">The Straight Dope (Tutorials)</a></li> |
| <li><a class="main-nav-link" href="https://gluon-cv.mxnet.io">GluonCV Toolkit</a></li> |
| <li><a class="main-nav-link" href="https://gluon-nlp.mxnet.io/">GluonNLP Toolkit</a></li> |
| </ul> |
| </li> |
| <li class="dropdown-submenu"> |
| <a aria-expanded="true" aria-haspopup="true" class="dropdown-toggle burger-link" data-toggle="dropdown" href="#" tabindex="-1">API</a> |
| <ul class="dropdown-menu"> |
| <li><a class="main-nav-link" href="/versions/1.2.1/api/python/index.html">Python</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/api/c++/index.html">C++</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/api/julia/index.html">Julia</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/api/perl/index.html">Perl</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/api/r/index.html">R</a></li> |
| <li><a class="main-nav-link" href="/versions/1.2.1/api/scala/index.html">Scala</a></li> |
| </ul> |
| </li> |
| <li class="dropdown-submenu"> |
| <a aria-expanded="true" aria-haspopup="true" class="dropdown-toggle burger-link" data-toggle="dropdown" href="#" tabindex="-1">Docs</a> |
| <ul class="dropdown-menu"> |
| <li><a href="/versions/1.2.1/faq/index.html" tabindex="-1">FAQ</a></li> |
| <li><a href="/versions/1.2.1/tutorials/index.html" tabindex="-1">Tutorials</a></li> |
| <li><a href="https://github.com/apache/incubator-mxnet/tree/1.2.1/example" tabindex="-1">Examples</a></li> |
| <li><a href="/versions/1.2.1/architecture/index.html" tabindex="-1">Architecture</a></li> |
| <li><a href="https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+Home" tabindex="-1">Developer Wiki</a></li> |
| <li><a href="/versions/1.2.1/model_zoo/index.html" tabindex="-1">Gluon Model Zoo</a></li> |
| <li><a href="https://github.com/onnx/onnx-mxnet" tabindex="-1">ONNX</a></li> |
| </ul> |
| </li> |
| <li class="dropdown-submenu dropdown"> |
| <a aria-haspopup="true" class="dropdown-toggle burger-link" data-toggle="dropdown" href="#" role="button" tabindex="-1">Community</a> |
| <ul class="dropdown-menu"> |
| <li><a href="http://discuss.mxnet.io" tabindex="-1">Forum</a></li> |
| <li><a href="https://github.com/apache/incubator-mxnet/tree/1.2.1" tabindex="-1">Github</a></li> |
| <li><a href="/versions/1.2.1/community/contribute.html" tabindex="-1">Contribute</a></li> |
| <li><a href="/versions/1.2.1/community/powered_by.html" tabindex="-1">Powered By</a></li> |
| </ul> |
| </li> |
| <li id="dropdown-menu-position-anchor-version-mobile" class="dropdown-submenu" style="position: relative"><a href="#" tabindex="-1">1.2.1</a><ul class="dropdown-menu"><li><a tabindex="-1" href=/>master</a></li><li><a tabindex="-1" href=/versions/1.6/>1.6</a></li><li><a tabindex="-1" href=/versions/1.5.0/>1.5.0</a></li><li><a tabindex="-1" href=/versions/1.4.1/>1.4.1</a></li><li><a tabindex="-1" href=/versions/1.3.1/>1.3.1</a></li><li><a tabindex="-1" href=/versions/1.2.1/>1.2.1</a></li><li><a tabindex="-1" href=/versions/1.1.0/>1.1.0</a></li><li><a tabindex="-1" href=/versions/1.0.0/>1.0.0</a></li><li><a tabindex="-1" href=/versions/0.12.1/>0.12.1</a></li><li><a tabindex="-1" href=/versions/0.11.0/>0.11.0</a></li></ul></li></ul> |
| </div> |
| <div class="plusIcon dropdown"> |
| <a class="dropdown-toggle" data-toggle="dropdown" href="#" role="button"><span aria-hidden="true" class="glyphicon glyphicon-plus"></span></a> |
| <ul class="dropdown-menu dropdown-menu-right" id="plusMenu"></ul> |
| </div> |
| <div id="search-input-wrap"> |
| <form action="../../search.html" autocomplete="off" class="" method="get" role="search"> |
| <div class="form-group inner-addon left-addon"> |
| <i class="glyphicon glyphicon-search"></i> |
| <input class="form-control" name="q" placeholder="Search" type="text"/> |
| </div> |
| <input name="check_keywords" type="hidden" value="yes"> |
| <input name="area" type="hidden" value="default"/> |
| </input></form> |
| <div id="search-preview"></div> |
| </div> |
| <div id="searchIcon"> |
| <span aria-hidden="true" class="glyphicon glyphicon-search"></span> |
| </div> |
| <!-- <div id="lang-select-wrap"> --> |
| <!-- <label id="lang-select-label"> --> |
| <!-- <\!-- <i class="fa fa-globe"></i> -\-> --> |
| <!-- <span></span> --> |
| <!-- </label> --> |
| <!-- <select id="lang-select"> --> |
| <!-- <option value="en">Eng</option> --> |
| <!-- <option value="zh">中文</option> --> |
| <!-- </select> --> |
| <!-- </div> --> |
| <!-- <a id="mobile-nav-toggle"> |
| <span class="mobile-nav-toggle-bar"></span> |
| <span class="mobile-nav-toggle-bar"></span> |
| <span class="mobile-nav-toggle-bar"></span> |
| </a> --> |
| </div> |
| </div> |
| </div> |
| <script type="text/javascript"> |
| $('body').css('background', 'white'); |
| </script> |
| <div class="container"> |
| <div class="row"> |
| <div aria-label="main navigation" class="sphinxsidebar leftsidebar" role="navigation"> |
| <div class="sphinxsidebarwrapper"> |
| <ul> |
| <li class="toctree-l1"><a class="reference internal" href="../../api/python/index.html">Python Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../api/r/index.html">R Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../api/julia/index.html">Julia Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../api/c++/index.html">C++ Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../api/scala/index.html">Scala Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../api/perl/index.html">Perl Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../faq/index.html">HowTo Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../architecture/index.html">System Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../index.html">Tutorials</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../community/index.html">Community</a></li> |
| </ul> |
| </div> |
| </div> |
| <div class="content"> |
| <div class="page-tracker"></div> |
| <div class="section" id="symbol-neural-network-graphs-and-auto-differentiation"> |
| <span id="symbol-neural-network-graphs-and-auto-differentiation"></span><h1>Symbol - Neural network graphs and auto-differentiation<a class="headerlink" href="#symbol-neural-network-graphs-and-auto-differentiation" title="Permalink to this headline">¶</a></h1> |
| <p>In a <a class="reference external" href="/versions/1.2.1/tutorials/basic/ndarray.html">previous tutorial</a>, we introduced <code class="docutils literal"><span class="pre">NDArray</span></code>, |
| the basic data structure for manipulating data in MXNet. |
Using <code class="docutils literal"><span class="pre">NDArray</span></code> by itself, we can execute a wide range of mathematical operations.
In fact, we could define and update a full neural network just by using <code class="docutils literal"><span class="pre">NDArray</span></code>.
<code class="docutils literal"><span class="pre">NDArray</span></code> allows you to write programs for scientific computation
in an imperative fashion, making full use of the native control flow of any front-end language.
So you might wonder, why don’t we just use <code class="docutils literal"><span class="pre">NDArray</span></code> for all computation?</p>
| <p>MXNet provides the Symbol API, an interface for symbolic programming. |
| With symbolic programming, rather than executing operations step by step, |
| we first define a <em>computation graph</em>. |
| This graph contains placeholders for inputs and designated outputs. |
| We can then compile the graph, yielding a function |
| that can be bound to <code class="docutils literal"><span class="pre">NDArray</span></code>s and run. |
| MXNet’s Symbol API is similar to the network configurations |
| used by <a class="reference external" href="http://caffe.berkeleyvision.org/">Caffe</a> |
| and the symbolic programming in <a class="reference external" href="http://deeplearning.net/software/theano/">Theano</a>.</p> |
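<p>End to end, the workflow looks roughly like this (a minimal sketch; the same <code class="docutils literal"><span class="pre">a</span> <span class="pre">+</span> <span class="pre">b</span></code> example is developed step by step later in this tutorial):</p>
<div class="highlight-python"><div class="highlight"><pre>import mxnet as mx

# 1. Define the computation graph with placeholders.
a = mx.sym.Variable('a')
b = mx.sym.Variable('b')
c = a + b

# 2. Bind the placeholders to concrete NDArrays.
executor = c.bind(ctx=mx.cpu(),
                  args={'a': mx.nd.ones((2, 3)), 'b': mx.nd.ones((2, 3)) * 2})

# 3. Run the bound graph.
print(executor.forward()[0].asnumpy())  # every element equals 3.0
</pre></div>
</div>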
<p>Another advantage of the symbolic approach is that
we can optimize our functions before using them.
For example, when we execute mathematical computations imperatively,
we don’t know, at the time we run each operation,
which values will be needed later on.
But with symbolic programming, we declare the required outputs in advance.
This means that we can recycle the memory allocated in intermediate steps,
for instance by performing operations in place. The Symbol API also uses less memory for the
same network. Refer to the <a class="reference external" href="/versions/1.2.1/faq/index.html">How To</a> and
<a class="reference external" href="/versions/1.2.1/architecture/index.html">Architecture</a> sections to learn more.</p>
<p>In our design notes, we present <a class="reference external" href="/versions/1.2.1/architecture/program_model.html">a more thorough discussion of the comparative strengths
of imperative and symbolic programming</a>.
But in this document, we’ll focus on teaching you how to use MXNet’s Symbol API.
In MXNet, we can compose Symbols from other Symbols using operators,
such as simple matrix operations (e.g. “+”)
or whole neural network layers (e.g. a convolution layer).
An operator can take multiple input variables,
produce multiple output symbols,
and maintain internal state symbols.</p>
| <p>For a visual explanation of these concepts, see |
| <a class="reference external" href="/versions/1.2.1/api/python/symbol_in_pictures/symbol_in_pictures.html">Symbolic Configuration and Execution in Pictures</a>.</p> |
| <p>To make things concrete, let’s take a hands-on look at the Symbol API. |
| There are a few different ways to compose a <code class="docutils literal"><span class="pre">Symbol</span></code>.</p> |
| <div class="section" id="prerequisites"> |
| <span id="prerequisites"></span><h2>Prerequisites<a class="headerlink" href="#prerequisites" title="Permalink to this headline">¶</a></h2> |
| <p>To complete this tutorial, we need:</p> |
| <ul> |
| <li><p class="first">MXNet. See the instructions for your operating system in <a class="reference external" href="/versions/1.2.1/install/index.html">Setup and Installation</a></p> |
| </li> |
| <li><p class="first"><a class="reference external" href="http://jupyter.org/">Jupyter</a></p> |
| <div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">pip</span> <span class="n">install</span> <span class="n">jupyter</span> |
| </pre></div> |
| </div> |
| </li> |
| <li><p class="first">GPUs - A section of this tutorial uses GPUs. If you don’t have GPUs on your machine, simply |
| set the variable gpu_device to mx.cpu().</p> |
| </li> |
| </ul> |
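<p>For example, a minimal fallback sketch (the variable name <code class="docutils literal"><span class="pre">gpu_device</span></code> is simply the convention used later in this tutorial):</p>
<div class="highlight-python"><div class="highlight"><pre>import mxnet as mx

# Use a GPU if you have one; otherwise point gpu_device at the CPU so the
# GPU section of this tutorial still runs.
gpu_device = mx.gpu()  # replace with mx.cpu() if no GPU is available
</pre></div>
</div>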
| </div> |
| <div class="section" id="basic-symbol-composition"> |
| <span id="basic-symbol-composition"></span><h2>Basic Symbol Composition<a class="headerlink" href="#basic-symbol-composition" title="Permalink to this headline">¶</a></h2> |
| <div class="section" id="basic-operators"> |
| <span id="basic-operators"></span><h3>Basic Operators<a class="headerlink" href="#basic-operators" title="Permalink to this headline">¶</a></h3> |
| <p>The following example builds a simple expression: <code class="docutils literal"><span class="pre">a</span> <span class="pre">+</span> <span class="pre">b</span></code>. |
| First, we create two placeholders with <code class="docutils literal"><span class="pre">mx.sym.Variable</span></code>, |
| giving them the names <code class="docutils literal"><span class="pre">a</span></code> and <code class="docutils literal"><span class="pre">b</span></code>. |
| We then construct the desired symbol by using the operator <code class="docutils literal"><span class="pre">+</span></code>. |
We don’t need to name our variables when creating them;
MXNet will automatically generate a unique name for each.
| In the example below, <code class="docutils literal"><span class="pre">c</span></code> is assigned a unique name automatically.</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">mxnet</span> <span class="kn">as</span> <span class="nn">mx</span> |
| <span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'a'</span><span class="p">)</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'b'</span><span class="p">)</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">a</span> <span class="o">+</span> <span class="n">b</span> |
| <span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <p>Most operators supported by <code class="docutils literal"><span class="pre">NDArray</span></code> are also supported by <code class="docutils literal"><span class="pre">Symbol</span></code>, for example:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="c1"># elemental wise multiplication</span> |
| <span class="n">d</span> <span class="o">=</span> <span class="n">a</span> <span class="o">*</span> <span class="n">b</span> |
| <span class="c1"># matrix multiplication</span> |
| <span class="n">e</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span> |
| <span class="c1"># reshape</span> |
| <span class="n">f</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">d</span><span class="o">+</span><span class="n">e</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="mi">4</span><span class="p">))</span> |
| <span class="c1"># broadcast</span> |
| <span class="n">g</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">broadcast_to</span><span class="p">(</span><span class="n">f</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span><span class="mi">4</span><span class="p">))</span> |
| <span class="c1"># plot</span> |
| <span class="n">mx</span><span class="o">.</span><span class="n">viz</span><span class="o">.</span><span class="n">plot_network</span><span class="p">(</span><span class="n">symbol</span><span class="o">=</span><span class="n">g</span><span class="p">)</span> |
| </pre></div> |
| </div> |
<p>The computations declared in the above examples can be bound to the input data
for evaluation by using the <code class="docutils literal"><span class="pre">bind</span></code> method. We discuss this further in the
<a class="reference internal" href="#symbol-manipulation">Symbol Manipulation</a> section.</p>
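<p>As a quick preview, a minimal sketch that binds the symbol <code class="docutils literal"><span class="pre">g</span></code> defined above (assuming <code class="docutils literal"><span class="pre">a</span></code> and <code class="docutils literal"><span class="pre">b</span></code> are fed 2x2 matrices):</p>
<div class="highlight-python"><div class="highlight"><pre># Bind g to concrete NDArrays and execute it.
ex = g.bind(ctx=mx.cpu(), args={'a': mx.nd.ones((2, 2)), 'b': mx.nd.ones((2, 2))})
print(ex.forward()[0].asnumpy())  # a (2, 4) array
</pre></div>
</div>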
| </div> |
| <div class="section" id="basic-neural-networks"> |
| <span id="basic-neural-networks"></span><h3>Basic Neural Networks<a class="headerlink" href="#basic-neural-networks" title="Permalink to this headline">¶</a></h3> |
| <p>Besides the basic operators, <code class="docutils literal"><span class="pre">Symbol</span></code> also supports a rich set of neural network layers. |
| The following example constructs a two layer fully connected neural network |
| and then visualizes the structure of that network given the input data shape.</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">net</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'data'</span><span class="p">)</span> |
| <span class="n">net</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">FullyConnected</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">net</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'fc1'</span><span class="p">,</span> <span class="n">num_hidden</span><span class="o">=</span><span class="mi">128</span><span class="p">)</span> |
| <span class="n">net</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Activation</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">net</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'relu1'</span><span class="p">,</span> <span class="n">act_type</span><span class="o">=</span><span class="s2">"relu"</span><span class="p">)</span> |
| <span class="n">net</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">FullyConnected</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">net</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'fc2'</span><span class="p">,</span> <span class="n">num_hidden</span><span class="o">=</span><span class="mi">10</span><span class="p">)</span> |
| <span class="n">net</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">SoftmaxOutput</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">net</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'out'</span><span class="p">)</span> |
| <span class="n">mx</span><span class="o">.</span><span class="n">viz</span><span class="o">.</span><span class="n">plot_network</span><span class="p">(</span><span class="n">net</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">{</span><span class="s1">'data'</span><span class="p">:(</span><span class="mi">100</span><span class="p">,</span><span class="mi">200</span><span class="p">)})</span> |
| </pre></div> |
| </div> |
<p>Each symbol takes a (unique) string name. NDArray and Symbol both represent
a single tensor. <em>Operators</em> represent the computation between tensors.
Operators take symbols (or NDArrays) as inputs, may also accept
other hyperparameters such as the number of hidden neurons (<em>num_hidden</em>) or the
activation type (<em>act_type</em>), and produce an output.</p>
| <p>We can view a symbol simply as a function taking several arguments. |
| And we can retrieve those arguments with the following method call:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">net</span><span class="o">.</span><span class="n">list_arguments</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| <p>These arguments are the parameters and inputs needed by each symbol:</p> |
| <ul class="simple"> |
| <li><em>data</em>: Input data needed by the variable <em>data</em>.</li> |
| <li><em>fc1_weight</em> and <em>fc1_bias</em>: The weight and bias for the first fully connected layer <em>fc1</em>.</li> |
| <li><em>fc2_weight</em> and <em>fc2_bias</em>: The weight and bias for the second fully connected layer <em>fc2</em>.</li> |
| <li><em>out_label</em>: The label needed by the loss.</li> |
| </ul> |
| <p>We can also specify the names explicitly:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">net</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'data'</span><span class="p">)</span> |
| <span class="n">w</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'myweight'</span><span class="p">)</span> |
| <span class="n">net</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">FullyConnected</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">net</span><span class="p">,</span> <span class="n">weight</span><span class="o">=</span><span class="n">w</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'fc1'</span><span class="p">,</span> <span class="n">num_hidden</span><span class="o">=</span><span class="mi">128</span><span class="p">)</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">list_arguments</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| <p>In the above example, <code class="docutils literal"><span class="pre">FullyConnected</span></code> layer has 3 inputs: data, weight, bias. |
| When any input is not specified, a variable will be automatically generated for it.</p> |
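<p>A quick sanity check on the example above (the expected names are an assumption based on MXNet’s default argument-naming convention):</p>
<div class="highlight-python"><div class="highlight"><pre># 'myweight' replaces the auto-generated 'fc1_weight', while the unspecified
# bias still gets an automatically generated variable.
assert net.list_arguments() == ['data', 'myweight', 'fc1_bias']
</pre></div>
</div>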
| </div> |
| </div> |
| <div class="section" id="more-complicated-composition"> |
| <span id="more-complicated-composition"></span><h2>More Complicated Composition<a class="headerlink" href="#more-complicated-composition" title="Permalink to this headline">¶</a></h2> |
| <p>MXNet provides well-optimized symbols for layers commonly used in deep learning |
| (see <a class="reference external" href="https://github.com/dmlc/mxnet/tree/master/src/operator">src/operator</a>). |
| We can also define new operators in Python. The following example first |
performs an element-wise add between two symbols, then feeds the result to the fully
connected operator:</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">lhs</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'data1'</span><span class="p">)</span> |
| <span class="n">rhs</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'data2'</span><span class="p">)</span> |
| <span class="n">net</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">FullyConnected</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">lhs</span> <span class="o">+</span> <span class="n">rhs</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'fc1'</span><span class="p">,</span> <span class="n">num_hidden</span><span class="o">=</span><span class="mi">128</span><span class="p">)</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">list_arguments</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| <p>We can also construct a symbol in a more flexible way than the single forward |
| composition depicted in the preceding example:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">data</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'data'</span><span class="p">)</span> |
| <span class="n">net1</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">FullyConnected</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'fc1'</span><span class="p">,</span> <span class="n">num_hidden</span><span class="o">=</span><span class="mi">10</span><span class="p">)</span> |
| <span class="n">net1</span><span class="o">.</span><span class="n">list_arguments</span><span class="p">()</span> |
| <span class="n">net2</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'data2'</span><span class="p">)</span> |
| <span class="n">net2</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">FullyConnected</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">net2</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'fc2'</span><span class="p">,</span> <span class="n">num_hidden</span><span class="o">=</span><span class="mi">10</span><span class="p">)</span> |
| <span class="n">composed</span> <span class="o">=</span> <span class="n">net2</span><span class="p">(</span><span class="n">data2</span><span class="o">=</span><span class="n">net1</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'composed'</span><span class="p">)</span> |
| <span class="n">composed</span><span class="o">.</span><span class="n">list_arguments</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| <p>In this example, <em>net2</em> is used as a function to apply to an existing symbol <em>net1</em>, |
| and the resulting <em>composed</em> symbol will have all the attributes of <em>net1</em> and <em>net2</em>.</p> |
| <p>Once you start building some bigger networks, you might want to name some |
| symbols with a common prefix to outline the structure of your network. |
| You can use the |
| <a class="reference external" href="https://github.com/dmlc/mxnet/blob/1.2.1/python/mxnet/name.py">Prefix</a> |
| NameManager as follows:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">data</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s2">"data"</span><span class="p">)</span> |
| <span class="n">net</span> <span class="o">=</span> <span class="n">data</span> |
| <span class="n">n_layer</span> <span class="o">=</span> <span class="mi">2</span> |
| <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">n_layer</span><span class="p">):</span> |
| <span class="k">with</span> <span class="n">mx</span><span class="o">.</span><span class="n">name</span><span class="o">.</span><span class="n">Prefix</span><span class="p">(</span><span class="s2">"layer</span><span class="si">%d</span><span class="s2">_"</span> <span class="o">%</span> <span class="p">(</span><span class="n">i</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)):</span> |
| <span class="n">net</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">FullyConnected</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">net</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s2">"fc"</span><span class="p">,</span> <span class="n">num_hidden</span><span class="o">=</span><span class="mi">100</span><span class="p">)</span> |
| <span class="n">net</span><span class="o">.</span><span class="n">list_arguments</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| <div class="section" id="modularized-construction-for-deep-networks"> |
| <span id="modularized-construction-for-deep-networks"></span><h3>Modularized Construction for Deep Networks<a class="headerlink" href="#modularized-construction-for-deep-networks" title="Permalink to this headline">¶</a></h3> |
<p>Constructing a <em>deep</em> network layer by layer (like the Google Inception network)
can be tedious owing to the large number of layers.
So, for such networks, we often modularize the construction.</p>
<p>For example, in the Google Inception network,
we can first define a factory function that chains the convolution,
batch normalization, and rectified linear unit (ReLU) activation layers together.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">ConvFactory</span><span class="p">(</span><span class="n">data</span><span class="p">,</span> <span class="n">num_filter</span><span class="p">,</span> <span class="n">kernel</span><span class="p">,</span> <span class="n">stride</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="mi">1</span><span class="p">),</span> <span class="n">pad</span><span class="o">=</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span><span class="n">name</span><span class="o">=</span><span class="bp">None</span><span class="p">,</span> <span class="n">suffix</span><span class="o">=</span><span class="s1">''</span><span class="p">):</span> |
| <span class="n">conv</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Convolution</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">num_filter</span><span class="o">=</span><span class="n">num_filter</span><span class="p">,</span> <span class="n">kernel</span><span class="o">=</span><span class="n">kernel</span><span class="p">,</span> |
| <span class="n">stride</span><span class="o">=</span><span class="n">stride</span><span class="p">,</span> <span class="n">pad</span><span class="o">=</span><span class="n">pad</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'conv_</span><span class="si">%s%s</span><span class="s1">'</span> <span class="o">%</span><span class="p">(</span><span class="n">name</span><span class="p">,</span> <span class="n">suffix</span><span class="p">))</span> |
| <span class="n">bn</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">BatchNorm</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">conv</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'bn_</span><span class="si">%s%s</span><span class="s1">'</span> <span class="o">%</span><span class="p">(</span><span class="n">name</span><span class="p">,</span> <span class="n">suffix</span><span class="p">))</span> |
| <span class="n">act</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Activation</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">bn</span><span class="p">,</span> <span class="n">act_type</span><span class="o">=</span><span class="s1">'relu'</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'relu_</span><span class="si">%s%s</span><span class="s1">'</span> |
| <span class="o">%</span><span class="p">(</span><span class="n">name</span><span class="p">,</span> <span class="n">suffix</span><span class="p">))</span> |
| <span class="k">return</span> <span class="n">act</span> |
| <span class="n">prev</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">"Previous Output"</span><span class="p">)</span> |
| <span class="n">conv_comp</span> <span class="o">=</span> <span class="n">ConvFactory</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">prev</span><span class="p">,</span> <span class="n">num_filter</span><span class="o">=</span><span class="mi">64</span><span class="p">,</span> <span class="n">kernel</span><span class="o">=</span><span class="p">(</span><span class="mi">7</span><span class="p">,</span><span class="mi">7</span><span class="p">),</span> <span class="n">stride</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">))</span> |
| <span class="n">shape</span> <span class="o">=</span> <span class="p">{</span><span class="s2">"Previous Output"</span> <span class="p">:</span> <span class="p">(</span><span class="mi">128</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">28</span><span class="p">,</span> <span class="mi">28</span><span class="p">)}</span> |
| <span class="n">mx</span><span class="o">.</span><span class="n">viz</span><span class="o">.</span><span class="n">plot_network</span><span class="p">(</span><span class="n">symbol</span><span class="o">=</span><span class="n">conv_comp</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="n">shape</span><span class="p">)</span> |
| </pre></div> |
| </div> |
<p>Then we can define a function that constructs an inception module based on
the factory function <code class="docutils literal"><span class="pre">ConvFactory</span></code>.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">InceptionFactoryA</span><span class="p">(</span><span class="n">data</span><span class="p">,</span> <span class="n">num_1x1</span><span class="p">,</span> <span class="n">num_3x3red</span><span class="p">,</span> <span class="n">num_3x3</span><span class="p">,</span> <span class="n">num_d3x3red</span><span class="p">,</span> <span class="n">num_d3x3</span><span class="p">,</span> |
| <span class="n">pool</span><span class="p">,</span> <span class="n">proj</span><span class="p">,</span> <span class="n">name</span><span class="p">):</span> |
| <span class="c1"># 1x1</span> |
| <span class="n">c1x1</span> <span class="o">=</span> <span class="n">ConvFactory</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">num_filter</span><span class="o">=</span><span class="n">num_1x1</span><span class="p">,</span> <span class="n">kernel</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">name</span><span class="o">=</span><span class="p">(</span><span class="s1">'</span><span class="si">%s</span><span class="s1">_1x1'</span> <span class="o">%</span> <span class="n">name</span><span class="p">))</span> |
| <span class="c1"># 3x3 reduce + 3x3</span> |
| <span class="n">c3x3r</span> <span class="o">=</span> <span class="n">ConvFactory</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">num_filter</span><span class="o">=</span><span class="n">num_3x3red</span><span class="p">,</span> <span class="n">kernel</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">name</span><span class="o">=</span><span class="p">(</span><span class="s1">'</span><span class="si">%s</span><span class="s1">_3x3'</span> <span class="o">%</span> <span class="n">name</span><span class="p">),</span> <span class="n">suffix</span><span class="o">=</span><span class="s1">'_reduce'</span><span class="p">)</span> |
| <span class="n">c3x3</span> <span class="o">=</span> <span class="n">ConvFactory</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">c3x3r</span><span class="p">,</span> <span class="n">num_filter</span><span class="o">=</span><span class="n">num_3x3</span><span class="p">,</span> <span class="n">kernel</span><span class="o">=</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">3</span><span class="p">),</span> <span class="n">pad</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">name</span><span class="o">=</span><span class="p">(</span><span class="s1">'</span><span class="si">%s</span><span class="s1">_3x3'</span> <span class="o">%</span> <span class="n">name</span><span class="p">))</span> |
| <span class="c1"># double 3x3 reduce + double 3x3</span> |
| <span class="n">cd3x3r</span> <span class="o">=</span> <span class="n">ConvFactory</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">num_filter</span><span class="o">=</span><span class="n">num_d3x3red</span><span class="p">,</span> <span class="n">kernel</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">name</span><span class="o">=</span><span class="p">(</span><span class="s1">'</span><span class="si">%s</span><span class="s1">_double_3x3'</span> <span class="o">%</span> <span class="n">name</span><span class="p">),</span> <span class="n">suffix</span><span class="o">=</span><span class="s1">'_reduce'</span><span class="p">)</span> |
| <span class="n">cd3x3</span> <span class="o">=</span> <span class="n">ConvFactory</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">cd3x3r</span><span class="p">,</span> <span class="n">num_filter</span><span class="o">=</span><span class="n">num_d3x3</span><span class="p">,</span> <span class="n">kernel</span><span class="o">=</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">3</span><span class="p">),</span> <span class="n">pad</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">name</span><span class="o">=</span><span class="p">(</span><span class="s1">'</span><span class="si">%s</span><span class="s1">_double_3x3_0'</span> <span class="o">%</span> <span class="n">name</span><span class="p">))</span> |
| <span class="n">cd3x3</span> <span class="o">=</span> <span class="n">ConvFactory</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">cd3x3</span><span class="p">,</span> <span class="n">num_filter</span><span class="o">=</span><span class="n">num_d3x3</span><span class="p">,</span> <span class="n">kernel</span><span class="o">=</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">3</span><span class="p">),</span> <span class="n">pad</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">name</span><span class="o">=</span><span class="p">(</span><span class="s1">'</span><span class="si">%s</span><span class="s1">_double_3x3_1'</span> <span class="o">%</span> <span class="n">name</span><span class="p">))</span> |
| <span class="c1"># pool + proj</span> |
| <span class="n">pooling</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Pooling</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">data</span><span class="p">,</span> <span class="n">kernel</span><span class="o">=</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">3</span><span class="p">),</span> <span class="n">stride</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">pad</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">pool_type</span><span class="o">=</span><span class="n">pool</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="p">(</span><span class="s1">'</span><span class="si">%s</span><span class="s1">_pool_</span><span class="si">%s</span><span class="s1">_pool'</span> <span class="o">%</span> <span class="p">(</span><span class="n">pool</span><span class="p">,</span> <span class="n">name</span><span class="p">)))</span> |
| <span class="n">cproj</span> <span class="o">=</span> <span class="n">ConvFactory</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">pooling</span><span class="p">,</span> <span class="n">num_filter</span><span class="o">=</span><span class="n">proj</span><span class="p">,</span> <span class="n">kernel</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">name</span><span class="o">=</span><span class="p">(</span><span class="s1">'</span><span class="si">%s</span><span class="s1">_proj'</span> <span class="o">%</span> <span class="n">name</span><span class="p">))</span> |
| <span class="c1"># concat</span> |
| <span class="n">concat</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Concat</span><span class="p">(</span><span class="o">*</span><span class="p">[</span><span class="n">c1x1</span><span class="p">,</span> <span class="n">c3x3</span><span class="p">,</span> <span class="n">cd3x3</span><span class="p">,</span> <span class="n">cproj</span><span class="p">],</span> <span class="n">name</span><span class="o">=</span><span class="s1">'ch_concat_</span><span class="si">%s</span><span class="s1">_chconcat'</span> <span class="o">%</span> <span class="n">name</span><span class="p">)</span> |
| <span class="k">return</span> <span class="n">concat</span> |
| <span class="n">prev</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">"Previous Output"</span><span class="p">)</span> |
| <span class="n">in3a</span> <span class="o">=</span> <span class="n">InceptionFactoryA</span><span class="p">(</span><span class="n">prev</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">96</span><span class="p">,</span> <span class="s2">"avg"</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s2">"in3a"</span><span class="p">)</span> |
| <span class="n">mx</span><span class="o">.</span><span class="n">viz</span><span class="o">.</span><span class="n">plot_network</span><span class="p">(</span><span class="n">symbol</span><span class="o">=</span><span class="n">in3a</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="n">shape</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <p>Finally, we can obtain the whole network by chaining multiple inception |
| modules. See a complete example |
| <a class="reference external" href="https://github.com/dmlc/mxnet/blob/master/example/image-classification/symbols/inception-bn.py">here</a>.</p> |
| </div> |
| <div class="section" id="group-multiple-symbols"> |
| <span id="group-multiple-symbols"></span><h3>Group Multiple Symbols<a class="headerlink" href="#group-multiple-symbols" title="Permalink to this headline">¶</a></h3> |
| <p>To construct neural networks with multiple loss layers, we can use |
| <code class="docutils literal"><span class="pre">mxnet.sym.Group</span></code> to group multiple symbols together. The following example |
| groups two outputs:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">net</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'data'</span><span class="p">)</span> |
| <span class="n">fc1</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">FullyConnected</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">net</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'fc1'</span><span class="p">,</span> <span class="n">num_hidden</span><span class="o">=</span><span class="mi">128</span><span class="p">)</span> |
| <span class="n">net</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Activation</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">fc1</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'relu1'</span><span class="p">,</span> <span class="n">act_type</span><span class="o">=</span><span class="s2">"relu"</span><span class="p">)</span> |
| <span class="n">out1</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">SoftmaxOutput</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">net</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'softmax'</span><span class="p">)</span> |
| <span class="n">out2</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">LinearRegressionOutput</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">net</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'regression'</span><span class="p">)</span> |
| <span class="n">group</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Group</span><span class="p">([</span><span class="n">out1</span><span class="p">,</span> <span class="n">out2</span><span class="p">])</span> |
| <span class="n">group</span><span class="o">.</span><span class="n">list_outputs</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| </div> |
| </div> |
| <div class="section" id="relations-to-ndarray"> |
| <span id="relations-to-ndarray"></span><h2>Relations to NDArray<a class="headerlink" href="#relations-to-ndarray" title="Permalink to this headline">¶</a></h2> |
<p>As you have seen, both <code class="docutils literal"><span class="pre">Symbol</span></code> and <code class="docutils literal"><span class="pre">NDArray</span></code> provide multi-dimensional array
operations, such as <code class="docutils literal"><span class="pre">c</span> <span class="pre">=</span> <span class="pre">a</span> <span class="pre">+</span> <span class="pre">b</span></code> in MXNet. Here we briefly clarify the differences.</p>
| <p>The <code class="docutils literal"><span class="pre">NDArray</span></code> provides an imperative programming alike interface, in which the |
| computations are evaluated sentence by sentence. While <code class="docutils literal"><span class="pre">Symbol</span></code> is closer to |
| declarative programming, in which we first declare the computation and then |
| evaluate with data. Examples in this category include regular expressions and |
| SQL.</p> |
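<p>A minimal sketch of this contrast, using only operations introduced in this tutorial
(the <code class="docutils literal"><span class="pre">eval</span></code> method is covered in a later section):</p>
<div class="highlight-python"><div class="highlight"><pre><span></span>import mxnet as mx

# Imperative (NDArray): every statement executes immediately.
a = mx.nd.ones((2, 3))
b = mx.nd.ones((2, 3))
c = a + b                  # computed right here
print(c.asnumpy())

# Declarative (Symbol): declare the computation first, evaluate later with data.
x = mx.sym.Variable('x')
y = mx.sym.Variable('y')
z = x + y                  # no computation yet, just a graph
out = z.eval(ctx=mx.cpu(), x=mx.nd.ones((2, 3)), y=mx.nd.ones((2, 3)))
print(out[0].asnumpy())
</pre></div>
</div>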
| <p>The pros for <code class="docutils literal"><span class="pre">NDArray</span></code>:</p> |
| <ul class="simple"> |
| <li>Straightforward.</li> |
<li>Easy to work with native language features (for loops, if-else conditions, etc.)
and libraries (NumPy, etc.).</li>
| <li>Easy step-by-step code debugging.</li> |
| </ul> |
| <p>The pros for <code class="docutils literal"><span class="pre">Symbol</span></code>:</p> |
| <ul class="simple"> |
<li>Provides almost all functionalities of NDArray, such as <code class="docutils literal"><span class="pre">+</span></code>, <code class="docutils literal"><span class="pre">*</span></code>, <code class="docutils literal"><span class="pre">sin</span></code>,
<code class="docutils literal"><span class="pre">reshape</span></code>, etc.</li>
| <li>Easy to save, load and visualize.</li> |
| <li>Easy for the backend to optimize the computation and memory usage.</li> |
| </ul> |
| </div> |
| <div class="section" id="symbol-manipulation"> |
| <span id="symbol-manipulation"></span><h2>Symbol Manipulation<a class="headerlink" href="#symbol-manipulation" title="Permalink to this headline">¶</a></h2> |
<p>One important difference between <code class="docutils literal"><span class="pre">Symbol</span></code> and <code class="docutils literal"><span class="pre">NDArray</span></code> is that with <code class="docutils literal"><span class="pre">Symbol</span></code> we first
declare the computation and then bind it with data to run it.</p>
<p>In this section, we introduce the functions for manipulating a symbol directly. Note,
however, that most of them are wrapped by the <code class="docutils literal"><span class="pre">module</span></code> package.</p>
| <div class="section" id="shape-and-type-inference"> |
| <span id="shape-and-type-inference"></span><h3>Shape and Type Inference<a class="headerlink" href="#shape-and-type-inference" title="Permalink to this headline">¶</a></h3> |
<p>For each symbol, we can query its arguments, auxiliary states, and outputs.
We can also infer a symbol's output shape and type from the known shapes or types
of some of its arguments, which facilitates memory allocation.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">arg_name</span> <span class="o">=</span> <span class="n">c</span><span class="o">.</span><span class="n">list_arguments</span><span class="p">()</span> <span class="c1"># get the names of the inputs</span> |
| <span class="n">out_name</span> <span class="o">=</span> <span class="n">c</span><span class="o">.</span><span class="n">list_outputs</span><span class="p">()</span> <span class="c1"># get the names of the outputs</span> |
| <span class="c1"># infers output shape given the shape of input arguments</span> |
| <span class="n">arg_shape</span><span class="p">,</span> <span class="n">out_shape</span><span class="p">,</span> <span class="n">_</span> <span class="o">=</span> <span class="n">c</span><span class="o">.</span><span class="n">infer_shape</span><span class="p">(</span><span class="n">a</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">),</span> <span class="n">b</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> |
| <span class="c1"># infers output type given the type of input arguments</span> |
| <span class="n">arg_type</span><span class="p">,</span> <span class="n">out_type</span><span class="p">,</span> <span class="n">_</span> <span class="o">=</span> <span class="n">c</span><span class="o">.</span><span class="n">infer_type</span><span class="p">(</span><span class="n">a</span><span class="o">=</span><span class="s1">'float32'</span><span class="p">,</span> <span class="n">b</span><span class="o">=</span><span class="s1">'float32'</span><span class="p">)</span> |
| <span class="p">{</span><span class="s1">'input'</span> <span class="p">:</span> <span class="nb">dict</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="n">arg_name</span><span class="p">,</span> <span class="n">arg_shape</span><span class="p">)),</span> |
| <span class="s1">'output'</span> <span class="p">:</span> <span class="nb">dict</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="n">out_name</span><span class="p">,</span> <span class="n">out_shape</span><span class="p">))}</span> |
| <span class="p">{</span><span class="s1">'input'</span> <span class="p">:</span> <span class="nb">dict</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="n">arg_name</span><span class="p">,</span> <span class="n">arg_type</span><span class="p">)),</span> |
| <span class="s1">'output'</span> <span class="p">:</span> <span class="nb">dict</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="n">out_name</span><span class="p">,</span> <span class="n">out_type</span><span class="p">))}</span> |
| </pre></div> |
| </div> |
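<p>The snippet above covers arguments and outputs; auxiliary states can be listed the same way.
A minimal sketch, using <code class="docutils literal"><span class="pre">BatchNorm</span></code>, whose running statistics are kept as auxiliary states:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Auxiliary states are extra arrays an operator maintains besides its arguments,
# e.g. the moving mean and variance of BatchNorm.
data = mx.sym.Variable('data')
bn = mx.sym.BatchNorm(data=data, name='bn')
print(bn.list_arguments())          # 'data' plus the learned 'bn_gamma' and 'bn_beta'
print(bn.list_auxiliary_states())   # e.g. ['bn_moving_mean', 'bn_moving_var']
</pre></div>
</div>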
| </div> |
| <div class="section" id="bind-with-data-and-evaluate"> |
| <span id="bind-with-data-and-evaluate"></span><h3>Bind with Data and Evaluate<a class="headerlink" href="#bind-with-data-and-evaluate" title="Permalink to this headline">¶</a></h3> |
<p>The symbol <code class="docutils literal"><span class="pre">c</span></code> constructed above declares what computation should be run. To
evaluate it, we first need to feed the arguments, namely the free variables, with data.</p>
<p>We can do this with the <code class="docutils literal"><span class="pre">bind</span></code> method, which accepts a device context and
a <code class="docutils literal"><span class="pre">dict</span></code> mapping free variable names to <code class="docutils literal"><span class="pre">NDArray</span></code>s, and returns an
executor. The executor provides a <code class="docutils literal"><span class="pre">forward</span></code> method for evaluation and an
<code class="docutils literal"><span class="pre">outputs</span></code> attribute that holds all the results.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">ex</span> <span class="o">=</span> <span class="n">c</span><span class="o">.</span><span class="n">bind</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(),</span> <span class="n">args</span><span class="o">=</span><span class="p">{</span><span class="s1">'a'</span> <span class="p">:</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">([</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">]),</span> |
| <span class="s1">'b'</span> <span class="p">:</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">([</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">])})</span> |
| <span class="n">ex</span><span class="o">.</span><span class="n">forward</span><span class="p">()</span> |
| <span class="k">print</span><span class="p">(</span><span class="s1">'number of outputs = </span><span class="si">%d</span><span class="se">\n</span><span class="s1">the first output = </span><span class="se">\n</span><span class="si">%s</span><span class="s1">'</span> <span class="o">%</span> <span class="p">(</span> |
| <span class="nb">len</span><span class="p">(</span><span class="n">ex</span><span class="o">.</span><span class="n">outputs</span><span class="p">),</span> <span class="n">ex</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()))</span> |
| </pre></div> |
| </div> |
<p>We can also evaluate the same symbol on a GPU with different data.</p>
<p><strong>Note:</strong> To execute the following section on a CPU, set <code class="docutils literal"><span class="pre">gpu_device</span></code> to <code class="docutils literal"><span class="pre">mx.cpu()</span></code>.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">gpu_device</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">gpu</span><span class="p">()</span> <span class="c1"># Change this to mx.cpu() in absence of GPUs.</span> |
| |
| <span class="n">ex_gpu</span> <span class="o">=</span> <span class="n">c</span><span class="o">.</span><span class="n">bind</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">gpu_device</span><span class="p">,</span> <span class="n">args</span><span class="o">=</span><span class="p">{</span><span class="s1">'a'</span> <span class="p">:</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">([</span><span class="mi">3</span><span class="p">,</span><span class="mi">4</span><span class="p">],</span> <span class="n">gpu_device</span><span class="p">)</span><span class="o">*</span><span class="mi">2</span><span class="p">,</span> |
| <span class="s1">'b'</span> <span class="p">:</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">([</span><span class="mi">3</span><span class="p">,</span><span class="mi">4</span><span class="p">],</span> <span class="n">gpu_device</span><span class="p">)</span><span class="o">*</span><span class="mi">3</span><span class="p">})</span> |
| <span class="n">ex_gpu</span><span class="o">.</span><span class="n">forward</span><span class="p">()</span> |
| <span class="n">ex_gpu</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
<p>We can also use the <code class="docutils literal"><span class="pre">eval</span></code> method to evaluate the symbol. It combines the <code class="docutils literal"><span class="pre">bind</span></code>
and <code class="docutils literal"><span class="pre">forward</span></code> calls into one.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">ex</span> <span class="o">=</span> <span class="n">c</span><span class="o">.</span><span class="n">eval</span><span class="p">(</span><span class="n">ctx</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(),</span> <span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">([</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">]),</span> <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">([</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">]))</span> |
| <span class="k">print</span><span class="p">(</span><span class="s1">'number of outputs = </span><span class="si">%d</span><span class="se">\n</span><span class="s1">the first output = </span><span class="se">\n</span><span class="si">%s</span><span class="s1">'</span> <span class="o">%</span> <span class="p">(</span> |
| <span class="nb">len</span><span class="p">(</span><span class="n">ex</span><span class="p">),</span> <span class="n">ex</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()))</span> |
| </pre></div> |
| </div> |
<p>For neural networks, a more commonly used pattern is <code class="docutils literal"><span class="pre">simple_bind</span></code>, which
creates all of the argument arrays for you given only their shapes. You can then call <code class="docutils literal"><span class="pre">forward</span></code>,
and <code class="docutils literal"><span class="pre">backward</span></code> (if gradients are needed), on the returned executor.</p>
| </div> |
| <div class="section" id="load-and-save"> |
| <span id="load-and-save"></span><h3>Load and Save<a class="headerlink" href="#load-and-save" title="Permalink to this headline">¶</a></h3> |
<p>Logically, symbols correspond to ndarrays: both represent a tensor, and both
are inputs and outputs of operators. We can serialize a <code class="docutils literal"><span class="pre">Symbol</span></code> object either with
<code class="docutils literal"><span class="pre">pickle</span></code>, or with the <code class="docutils literal"><span class="pre">save</span></code> and <code class="docutils literal"><span class="pre">load</span></code> methods directly, as discussed in the
<a class="reference external" href="/versions/1.2.1/tutorials/basic/ndarray.html#serialize-from-to-distributed-filesystems">NDArray tutorial</a>.</p>
<p>When serializing an <code class="docutils literal"><span class="pre">NDArray</span></code>, we serialize the tensor data in it and dump it
straight to disk in binary format.
A symbol, however, represents a graph: graphs are composed by chaining operators and are
implicitly represented by their output symbols. So, when serializing a <code class="docutils literal"><span class="pre">Symbol</span></code>, we
serialize the graph of which the symbol is an output. For serialization, <code class="docutils literal"><span class="pre">Symbol</span></code>
uses the more readable <code class="docutils literal"><span class="pre">json</span></code> format. To convert a symbol to a <code class="docutils literal"><span class="pre">json</span></code>
string, use the <code class="docutils literal"><span class="pre">tojson</span></code> method.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="k">print</span><span class="p">(</span><span class="n">c</span><span class="o">.</span><span class="n">tojson</span><span class="p">())</span> |
| <span class="n">c</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="s1">'symbol-c.json'</span><span class="p">)</span> |
| <span class="n">c2</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="s1">'symbol-c.json'</span><span class="p">)</span> |
| <span class="n">c</span><span class="o">.</span><span class="n">tojson</span><span class="p">()</span> <span class="o">==</span> <span class="n">c2</span><span class="o">.</span><span class="n">tojson</span><span class="p">()</span> |
| </pre></div> |
| </div> |
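<p>As mentioned above, <code class="docutils literal"><span class="pre">pickle</span></code> also works. A minimal sketch:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span>import pickle

# Serialize the symbol with pickle as an alternative to save/load.
with open('symbol-c.pkl', 'wb') as f:
    pickle.dump(c, f)
with open('symbol-c.pkl', 'rb') as f:
    c3 = pickle.load(f)
print(c.tojson() == c3.tojson())     # the reconstructed graph is identical
</pre></div>
</div>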
| </div> |
| </div> |
| <div class="section" id="customized-symbol"> |
| <span id="customized-symbol"></span><h2>Customized Symbol<a class="headerlink" href="#customized-symbol" title="Permalink to this headline">¶</a></h2> |
| <p>Most operators such as <code class="docutils literal"><span class="pre">mx.sym.Convolution</span></code> and <code class="docutils literal"><span class="pre">mx.sym.Reshape</span></code> are implemented |
| in C++ for better performance. MXNet also allows users to write new operators |
using any front-end language, such as Python. This often makes development and
debugging much easier. To implement an operator in Python, refer to
| <a class="reference external" href="/versions/1.2.1/faq/new_op.html">How to create new operators</a>.</p> |
| </div> |
| <div class="section" id="advanced-usages"> |
| <span id="advanced-usages"></span><h2>Advanced Usages<a class="headerlink" href="#advanced-usages" title="Permalink to this headline">¶</a></h2> |
| <div class="section" id="type-cast"> |
| <span id="type-cast"></span><h3>Type Cast<a class="headerlink" href="#type-cast" title="Permalink to this headline">¶</a></h3> |
<p>By default, MXNet uses 32-bit floats.
But for a better accuracy-performance trade-off,
we can also use a lower-precision data type.
For example, the Nvidia Tesla Pascal GPUs
(e.g. the P100) have improved 16-bit float performance,
while the GTX Pascal GPUs (e.g. the GTX 1080) are fast on 8-bit integers.</p>
<p>To convert the data type, we can use the
<code class="docutils literal"><span class="pre">mx.sym.cast</span></code> operator as follows:</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'data'</span><span class="p">)</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">cast</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">a</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="s1">'float16'</span><span class="p">)</span> |
| <span class="n">arg</span><span class="p">,</span> <span class="n">out</span><span class="p">,</span> <span class="n">_</span> <span class="o">=</span> <span class="n">b</span><span class="o">.</span><span class="n">infer_type</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="s1">'float32'</span><span class="p">)</span> |
| <span class="k">print</span><span class="p">({</span><span class="s1">'input'</span><span class="p">:</span><span class="n">arg</span><span class="p">,</span> <span class="s1">'output'</span><span class="p">:</span><span class="n">out</span><span class="p">})</span> |
| |
| <span class="n">c</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">cast</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">a</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="s1">'uint8'</span><span class="p">)</span> |
| <span class="n">arg</span><span class="p">,</span> <span class="n">out</span><span class="p">,</span> <span class="n">_</span> <span class="o">=</span> <span class="n">c</span><span class="o">.</span><span class="n">infer_type</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="s1">'int32'</span><span class="p">)</span> |
| <span class="k">print</span><span class="p">({</span><span class="s1">'input'</span><span class="p">:</span><span class="n">arg</span><span class="p">,</span> <span class="s1">'output'</span><span class="p">:</span><span class="n">out</span><span class="p">})</span> |
| </pre></div> |
| </div> |
| </div> |
| <div class="section" id="variable-sharing"> |
| <span id="variable-sharing"></span><h3>Variable Sharing<a class="headerlink" href="#variable-sharing" title="Permalink to this headline">¶</a></h3> |
<p>To share contents between several symbols,
we can bind these symbols to the same array, as follows:</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'a'</span><span class="p">)</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'b'</span><span class="p">)</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">a</span> <span class="o">+</span> <span class="n">a</span> <span class="o">*</span> <span class="n">a</span> |
| |
| <span class="n">data</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span><span class="o">*</span><span class="mi">2</span> |
| <span class="n">ex</span> <span class="o">=</span> <span class="n">b</span><span class="o">.</span><span class="n">bind</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(),</span> <span class="n">args</span><span class="o">=</span><span class="p">{</span><span class="s1">'a'</span><span class="p">:</span><span class="n">data</span><span class="p">,</span> <span class="s1">'b'</span><span class="p">:</span><span class="n">data</span><span class="p">})</span> |
| <span class="n">ex</span><span class="o">.</span><span class="n">forward</span><span class="p">()</span> |
| <span class="n">ex</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| <div class="btn-group" role="group"> |
| <div class="download-btn"><a download="symbol.ipynb" href="symbol.ipynb"><span class="glyphicon glyphicon-download-alt"></span> symbol.ipynb</a></div></div></div> |
| </div> |
| </div> |
| </div> |
| </div> |
| <div aria-label="main navigation" class="sphinxsidebar rightsidebar" role="navigation"> |
| <div class="sphinxsidebarwrapper"> |
| <h3><a href="../../index.html">Table Of Contents</a></h3> |
| <ul> |
| <li><a class="reference internal" href="#">Symbol - Neural network graphs and auto-differentiation</a><ul> |
| <li><a class="reference internal" href="#prerequisites">Prerequisites</a></li> |
| <li><a class="reference internal" href="#basic-symbol-composition">Basic Symbol Composition</a><ul> |
| <li><a class="reference internal" href="#basic-operators">Basic Operators</a></li> |
| <li><a class="reference internal" href="#basic-neural-networks">Basic Neural Networks</a></li> |
| </ul> |
| </li> |
| <li><a class="reference internal" href="#more-complicated-composition">More Complicated Composition</a><ul> |
| <li><a class="reference internal" href="#modularized-construction-for-deep-networks">Modularized Construction for Deep Networks</a></li> |
| <li><a class="reference internal" href="#group-multiple-symbols">Group Multiple Symbols</a></li> |
| </ul> |
| </li> |
| <li><a class="reference internal" href="#relations-to-ndarray">Relations to NDArray</a></li> |
| <li><a class="reference internal" href="#symbol-manipulation">Symbol Manipulation</a><ul> |
| <li><a class="reference internal" href="#shape-and-type-inference">Shape and Type Inference</a></li> |
| <li><a class="reference internal" href="#bind-with-data-and-evaluate">Bind with Data and Evaluate</a></li> |
| <li><a class="reference internal" href="#load-and-save">Load and Save</a></li> |
| </ul> |
| </li> |
| <li><a class="reference internal" href="#customized-symbol">Customized Symbol</a></li> |
| <li><a class="reference internal" href="#advanced-usages">Advanced Usages</a><ul> |
| <li><a class="reference internal" href="#type-cast">Type Cast</a></li> |
| <li><a class="reference internal" href="#variable-sharing">Variable Sharing</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| </ul> |
| </div> |
| </div> |
| </div><div class="footer"> |
| <div class="section-disclaimer"> |
| <div class="container"> |
| <div> |
| <img height="60" src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/apache_incubator_logo.png"/> |
| <p> |
| Apache MXNet is an effort undergoing incubation at The Apache Software Foundation (ASF), <strong>sponsored by the <i>Apache Incubator</i></strong>. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. |
| </p> |
| <p> |
| "Copyright © 2017-2018, The Apache Software Foundation |
| Apache MXNet, MXNet, Apache, the Apache feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the Apache Software Foundation." |
| </p> |
| </div> |
| </div> |
| </div> |
| </div> <!-- pagename != index --> |
| </div> |
| <script crossorigin="anonymous" integrity="sha384-0mSbJDEHialfmuBBQP6A4Qrprq5OVfW37PRR3j5ELqxss1yVqOtnepnHVP9aJ7xS" src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script> |
| <script src="../../_static/js/sidebar.js" type="text/javascript"></script> |
| <script src="../../_static/js/search.js" type="text/javascript"></script> |
| <script src="../../_static/js/navbar.js" type="text/javascript"></script> |
| <script src="../../_static/js/clipboard.min.js" type="text/javascript"></script> |
| <script src="../../_static/js/copycode.js" type="text/javascript"></script> |
| <script src="../../_static/js/page.js" type="text/javascript"></script> |
| <script src="../../_static/js/docversion.js" type="text/javascript"></script> |
| <script type="text/javascript"> |
| $('body').ready(function () { |
| $('body').css('visibility', 'visible'); |
| }); |
| </script> |
| </body> |
| </html> |