<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8"/>
<meta content="IE=edge" http-equiv="X-UA-Compatible"/>
<meta content="width=device-width, initial-scale=1" name="viewport"/>
<title>Train a Linear Regression Model with Sparse Symbols — mxnet documentation</title>
<link crossorigin="anonymous" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7" rel="stylesheet"/>
<link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.5.0/css/font-awesome.min.css" rel="stylesheet"/>
<link href="../../_static/basic.css" rel="stylesheet" type="text/css"/>
<link href="../../_static/pygments.css" rel="stylesheet" type="text/css"/>
<link href="../../_static/mxnet.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT: '../../',
VERSION: '',
COLLAPSE_INDEX: false,
FILE_SUFFIX: '.html',
HAS_SOURCE: true,
SOURCELINK_SUFFIX: ''
};
</script>
<script src="../../_static/jquery-1.11.1.js" type="text/javascript"></script>
<script src="../../_static/underscore.js" type="text/javascript"></script>
<script src="../../_static/searchtools_custom.js" type="text/javascript"></script>
<script src="../../_static/doctools.js" type="text/javascript"></script>
<script src="../../_static/selectlang.js" type="text/javascript"></script>
<script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
<script type="text/javascript"> jQuery(function() { Search.loadIndex("/searchindex.js"); Search.init();}); </script>
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new
Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-96378503-1', 'auto');
ga('send', 'pageview');
</script>
<!-- -->
<!-- <script type="text/javascript" src="../../_static/jquery.js"></script> -->
<!-- -->
<!-- <script type="text/javascript" src="../../_static/underscore.js"></script> -->
<!-- -->
<!-- <script type="text/javascript" src="../../_static/doctools.js"></script> -->
<!-- -->
<!-- <script type="text/javascript" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> -->
<!-- -->
<link href="../index.html" rel="up" title="Tutorials">
<link href="../../community/index.html" rel="next" title="MXNet Community"/>
<link href="row_sparse.html" rel="prev" title="RowSparseNDArray - NDArray for Sparse Gradient Updates"/>
<link href="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet-icon.png" rel="icon" type="image/png">
</link></link></head>
<body background="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet-background-compressed.jpeg" role="document">
<div class="content-block"><div class="navbar navbar-fixed-top">
<div class="container" id="navContainer">
<div class="innder" id="header-inner">
<h1 id="logo-wrap">
<a href="../../" id="logo"><img src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet_logo.png"/></a>
</h1>
<nav class="nav-bar" id="main-nav">
<a class="main-nav-link" href="../../install/index.html">Install</a>
<a class="main-nav-link" href="../../tutorials/index.html">Tutorials</a>
<span id="dropdown-menu-position-anchor">
<a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">Gluon <span class="caret"></span></a>
<ul class="dropdown-menu navbar-menu" id="package-dropdown-menu">
<li><a class="main-nav-link" href="../../gluon/index.html">About</a></li>
<li><a class="main-nav-link" href="http://gluon.mxnet.io">Tutorials</a></li>
</ul>
</span>
<span id="dropdown-menu-position-anchor">
<a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">API <span class="caret"></span></a>
<ul class="dropdown-menu navbar-menu" id="package-dropdown-menu">
<li><a class="main-nav-link" href="../../api/python/index.html">Python</a></li>
<li><a class="main-nav-link" href="../../api/scala/index.html">Scala</a></li>
<li><a class="main-nav-link" href="../../api/r/index.html">R</a></li>
<li><a class="main-nav-link" href="../../api/julia/index.html">Julia</a></li>
<li><a class="main-nav-link" href="../../api/c++/index.html">C++</a></li>
<li><a class="main-nav-link" href="../../api/perl/index.html">Perl</a></li>
</ul>
</span>
<span id="dropdown-menu-position-anchor-docs">
<a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">Docs <span class="caret"></span></a>
<ul class="dropdown-menu navbar-menu" id="package-dropdown-menu-docs">
<li><a class="main-nav-link" href="../../faq/index.html">FAQ</a></li>
<li><a class="main-nav-link" href="../../architecture/index.html">Architecture</a></li>
<li><a class="main-nav-link" href="https://github.com/apache/incubator-mxnet/tree/0.12.0/example">Examples</a></li>
<li><a class="main-nav-link" href="../../model_zoo/index.html">Model Zoo</a></li>
</ul>
</span>
<a class="main-nav-link" href="https://github.com/dmlc/mxnet">Github</a>
<span id="dropdown-menu-position-anchor-community">
<a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">Community <span class="caret"></span></a>
<ul class="dropdown-menu navbar-menu" id="package-dropdown-menu-community">
<li><a class="main-nav-link" href="../../community/index.html">Community</a></li>
<li><a class="main-nav-link" href="../../community/contribute.html">Contribute</a></li>
<li><a class="main-nav-link" href="../../community/powered_by.html">Powered By</a></li>
</ul>
</span>
<a class="main-nav-link" href="http://discuss.mxnet.io">Discuss</a>
<span id="dropdown-menu-position-anchor-version" style="position: relative"><a href="#" class="main-nav-link dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="true">Versions(0.12.0)<span class="caret"></span></a><ul id="package-dropdown-menu" class="dropdown-menu"><li><a class="main-nav-link" href=https://mxnet.incubator.apache.org/>1.0.0</a></li><li><a class="main-nav-link" href=https://mxnet.incubator.apache.org/versions/0.12.1/index.html>0.12.1</a></li><li><a class="main-nav-link" href=https://mxnet.incubator.apache.org/versions/0.12.0/index.html>0.12.0</a></li><li><a class="main-nav-link" href=https://mxnet.incubator.apache.org/versions/0.11.0/index.html>0.11.0</a></li><li><a class="main-nav-link" href=https://mxnet.incubator.apache.org/versions/master/index.html>master</a></li></ul></span></nav>
<script> function getRootPath(){ return "../../" } </script>
<div class="burgerIcon dropdown">
<a class="dropdown-toggle" data-toggle="dropdown" href="#" role="button"></a>
<ul class="dropdown-menu" id="burgerMenu">
<li><a href="../../install/index.html">Install</a></li>
<li><a class="main-nav-link" href="../../tutorials/index.html">Tutorials</a></li>
<li class="dropdown-submenu">
<a href="#" tabindex="-1">Community</a>
<ul class="dropdown-menu">
<li><a href="../../community/index.html" tabindex="-1">Community</a></li>
<li><a href="../../community/contribute.html" tabindex="-1">Contribute</a></li>
<li><a href="../../community/powered_by.html" tabindex="-1">Powered By</a></li>
</ul>
</li>
<li class="dropdown-submenu">
<a href="#" tabindex="-1">API</a>
<ul class="dropdown-menu">
<li><a href="../../api/python/index.html" tabindex="-1">Python</a>
</li>
<li><a href="../../api/scala/index.html" tabindex="-1">Scala</a>
</li>
<li><a href="../../api/r/index.html" tabindex="-1">R</a>
</li>
<li><a href="../../api/julia/index.html" tabindex="-1">Julia</a>
</li>
<li><a href="../../api/c++/index.html" tabindex="-1">C++</a>
</li>
<li><a href="../../api/perl/index.html" tabindex="-1">Perl</a>
</li>
</ul>
</li>
<li class="dropdown-submenu">
<a href="#" tabindex="-1">Docs</a>
<ul class="dropdown-menu">
<li><a href="../../tutorials/index.html" tabindex="-1">Tutorials</a></li>
<li><a href="../../faq/index.html" tabindex="-1">FAQ</a></li>
<li><a href="../../architecture/index.html" tabindex="-1">Architecture</a></li>
<li><a href="https://github.com/apache/incubator-mxnet/tree/0.12.0/example" tabindex="-1">Examples</a></li>
<li><a href="../../model_zoo/index.html" tabindex="-1">Model Zoo</a></li>
</ul>
</li>
<li><a href="../../architecture/index.html">Architecture</a></li>
<li><a class="main-nav-link" href="https://github.com/dmlc/mxnet">Github</a></li>
<li id="dropdown-menu-position-anchor-version-mobile" class="dropdown-submenu" style="position: relative"><a href="#" tabindex="-1">Versions(0.12.0)</a><ul class="dropdown-menu"><li><a tabindex="-1" href=https://mxnet.incubator.apache.org/>1.0.0</a></li><li><a tabindex="-1" href=https://mxnet.incubator.apache.org/versions/0.12.1/index.html>0.12.1</a></li><li><a tabindex="-1" href=https://mxnet.incubator.apache.org/versions/0.12.0/index.html>0.12.0</a></li><li><a tabindex="-1" href=https://mxnet.incubator.apache.org/versions/0.11.0/index.html>0.11.0</a></li><li><a tabindex="-1" href=https://mxnet.incubator.apache.org/versions/master/index.html>master</a></li></ul></li></ul>
</div>
<div class="plusIcon dropdown">
<a class="dropdown-toggle" data-toggle="dropdown" href="#" role="button"><span aria-hidden="true" class="glyphicon glyphicon-plus"></span></a>
<ul class="dropdown-menu dropdown-menu-right" id="plusMenu"></ul>
</div>
<div id="search-input-wrap">
<form action="../../search.html" autocomplete="off" class="" method="get" role="search">
<div class="form-group inner-addon left-addon">
<i class="glyphicon glyphicon-search"></i>
<input class="form-control" name="q" placeholder="Search" type="text"/>
</div>
<input name="check_keywords" type="hidden" value="yes"/>
<input name="area" type="hidden" value="default"/>
</form>
<div id="search-preview"></div>
</div>
<div id="searchIcon">
<span aria-hidden="true" class="glyphicon glyphicon-search"></span>
</div>
<!-- <div id="lang-select-wrap"> -->
<!-- <label id="lang-select-label"> -->
<!-- <\!-- <i class="fa fa-globe"></i> -\-> -->
<!-- <span></span> -->
<!-- </label> -->
<!-- <select id="lang-select"> -->
<!-- <option value="en">Eng</option> -->
<!-- <option value="zh">中文</option> -->
<!-- </select> -->
<!-- </div> -->
<!-- <a id="mobile-nav-toggle">
<span class="mobile-nav-toggle-bar"></span>
<span class="mobile-nav-toggle-bar"></span>
<span class="mobile-nav-toggle-bar"></span>
</a> -->
</div>
</div>
</div>
<script type="text/javascript">
$('body').css('background', 'white');
</script>
<div class="container">
<div class="row">
<div aria-label="main navigation" class="sphinxsidebar leftsidebar" role="navigation">
<div class="sphinxsidebarwrapper">
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="../../api/python/index.html">Python Documents</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../api/r/index.html">R Documents</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../api/julia/index.html">Julia Documents</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../api/c++/index.html">C++ Documents</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../api/scala/index.html">Scala Documents</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../api/perl/index.html">Perl Documents</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../faq/index.html">HowTo Documents</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../architecture/index.html">System Documents</a></li>
<li class="toctree-l1 current"><a class="reference internal" href="../index.html">Tutorials</a><ul class="current">
<li class="toctree-l2 current"><a class="reference internal" href="../index.html#python">Python</a><ul class="current">
<li class="toctree-l3"><a class="reference internal" href="../index.html#basic">Basic</a></li>
<li class="toctree-l3"><a class="reference internal" href="../index.html#training-and-inference">Training and Inference</a></li>
<li class="toctree-l3 current"><a class="reference internal" href="../index.html#sparse-ndarray">Sparse NDArray</a><ul class="current">
<li class="toctree-l4"><a class="reference internal" href="csr.html">CSRNDArray - NDArray in Compressed Sparse Row Storage Format</a></li>
<li class="toctree-l4"><a class="reference internal" href="row_sparse.html">RowSparseNDArray - NDArray for Sparse Gradient Updates</a></li>
<li class="toctree-l4 current"><a class="current reference internal" href="">Train a Linear Regression Model with Sparse Symbols</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../index.html#contributing-tutorials">Contributing Tutorials</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../community/index.html">Community</a></li>
</ul>
</div>
</div>
<div class="content">
<div class="page-tracker"></div>
<div class="section" id="train-a-linear-regression-model-with-sparse-symbols">
<span id="train-a-linear-regression-model-with-sparse-symbols"></span><h1>Train a Linear Regression Model with Sparse Symbols<a class="headerlink" href="#train-a-linear-regression-model-with-sparse-symbols" title="Permalink to this headline"></a></h1>
<p>In previous tutorials, we introduced <code class="docutils literal"><span class="pre">CSRNDArray</span></code> and <code class="docutils literal"><span class="pre">RowSparseNDArray</span></code>,
the basic data structures for manipulating sparse data.
MXNet also provides a <code class="docutils literal"><span class="pre">Sparse</span> <span class="pre">Symbol</span></code> API, which enables symbolic expressions that handle sparse arrays.
In this tutorial, we first focus on how to compose a symbolic graph with sparse operators,
then train a linear regression model using sparse symbols with the Module API.</p>
<div class="section" id="prerequisites">
<span id="prerequisites"></span><h2>Prerequisites<a class="headerlink" href="#prerequisites" title="Permalink to this headline"></a></h2>
<p>To complete this tutorial, we need:</p>
<ul class="simple">
<li>MXNet. See the instructions for your operating system in <a class="reference external" href="https://mxnet.incubator.apache.org/get_started/install.html">Setup and Installation</a>.</li>
<li><a class="reference external" href="http://jupyter.org/index.html">Jupyter Notebook</a> and <a class="reference external" href="http://docs.python-requests.org/en/master/">Python Requests</a> packages.</li>
</ul>
<div class="highlight-python"><div class="highlight"><pre><span></span>pip install jupyter requests
</pre></div>
</div>
<ul class="simple">
<li>Basic knowledge of Symbol in MXNet. See the detailed tutorial for Symbol in <a class="reference external" href="https://mxnet.incubator.apache.org/tutorials/basic/symbol.html">Symbol - Neural Network Graphs and Auto-differentiation</a>.</li>
<li>Basic knowledge of CSRNDArray in MXNet. See the detailed tutorial for CSRNDArray in <a class="reference external" href="https://mxnet.incubator.apache.org/versions/master/tutorials/sparse/csr.html">CSRNDArray - NDArray in Compressed Sparse Row Storage Format</a>.</li>
<li>Basic knowledge of RowSparseNDArray in MXNet. See the detailed tutorial for RowSparseNDArray in <a class="reference external" href="https://mxnet.incubator.apache.org/versions/master/tutorials/sparse/row_sparse.html">RowSparseNDArray - NDArray for Sparse Gradient Updates</a>.</li>
</ul>
</div>
<div class="section" id="variables">
<span id="variables"></span><h2>Variables<a class="headerlink" href="#variables" title="Permalink to this headline"></a></h2>
<p>Variables are placeholders for arrays. We can use them to hold sparse arrays too.</p>
<div class="section" id="variable-storage-types">
<span id="variable-storage-types"></span><h3>Variable Storage Types<a class="headerlink" href="#variable-storage-types" title="Permalink to this headline"></a></h3>
<p>The <code class="docutils literal"><span class="pre">stype</span></code> attribute of a variable is used to indicate the storage type of the array.
By default, the <code class="docutils literal"><span class="pre">stype</span></code> of a variable is “default” which indicates the default dense storage format.
We can specify the <code class="docutils literal"><span class="pre">stype</span></code> of a variable as “csr” or “row_sparse” to hold sparse arrays.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">mxnet</span> <span class="kn">as</span> <span class="nn">mx</span>
<span class="c1"># Create a variable to hold an NDArray</span>
<span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'a'</span><span class="p">)</span>
<span class="c1"># Create a variable to hold a CSRNDArray</span>
<span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'b'</span><span class="p">,</span> <span class="n">stype</span><span class="o">=</span><span class="s1">'csr'</span><span class="p">)</span>
<span class="c1"># Create a variable to hold a RowSparseNDArray</span>
<span class="n">c</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'c'</span><span class="p">,</span> <span class="n">stype</span><span class="o">=</span><span class="s1">'row_sparse'</span><span class="p">)</span>
<span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="section" id="bind-with-sparse-arrays">
<span id="bind-with-sparse-arrays"></span><h3>Bind with Sparse Arrays<a class="headerlink" href="#bind-with-sparse-arrays" title="Permalink to this headline"></a></h3>
<p>The sparse symbols constructed above declare the storage types of the arrays they will hold.
To evaluate them, we need to feed the free variables with sparse data.</p>
<p>You can instantiate an executor from a sparse symbol by using the <code class="docutils literal"><span class="pre">simple_bind</span></code> method,
which allocates arrays of zeros for all free variables according to their storage types.
The executor provides the <code class="docutils literal"><span class="pre">forward</span></code> method for evaluation and an
<code class="docutils literal"><span class="pre">outputs</span></code> attribute to get all the results. Later, we will show the use of the <code class="docutils literal"><span class="pre">backward</span></code> method and other methods for computing gradients and updating parameters. A simple example first:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">shape</span> <span class="o">=</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span><span class="mi">2</span><span class="p">)</span>
<span class="c1"># Instantiate an executor from sparse symbols</span>
<span class="n">b_exec</span> <span class="o">=</span> <span class="n">b</span><span class="o">.</span><span class="n">simple_bind</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(),</span> <span class="n">b</span><span class="o">=</span><span class="n">shape</span><span class="p">)</span>
<span class="n">c_exec</span> <span class="o">=</span> <span class="n">c</span><span class="o">.</span><span class="n">simple_bind</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(),</span> <span class="n">c</span><span class="o">=</span><span class="n">shape</span><span class="p">)</span>
<span class="n">b_exec</span><span class="o">.</span><span class="n">forward</span><span class="p">()</span>
<span class="n">c_exec</span><span class="o">.</span><span class="n">forward</span><span class="p">()</span>
<span class="c1"># Sparse arrays of zeros are bound to b and c</span>
<span class="k">print</span><span class="p">(</span><span class="n">b_exec</span><span class="o">.</span><span class="n">outputs</span><span class="p">,</span> <span class="n">c_exec</span><span class="o">.</span><span class="n">outputs</span><span class="p">)</span>
</pre></div>
</div>
<p>You can update the array held by a variable by accessing the executor’s <code class="docutils literal"><span class="pre">arg_dict</span></code> and assigning new values.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">b_exec</span><span class="o">.</span><span class="n">arg_dict</span><span class="p">[</span><span class="s1">'b'</span><span class="p">][:]</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">(</span><span class="n">shape</span><span class="p">)</span><span class="o">.</span><span class="n">tostype</span><span class="p">(</span><span class="s1">'csr'</span><span class="p">)</span>
<span class="n">b_exec</span><span class="o">.</span><span class="n">forward</span><span class="p">()</span>
<span class="c1"># The array `b` holds are updated to be ones</span>
<span class="n">eval_b</span> <span class="o">=</span> <span class="n">b_exec</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="p">{</span><span class="s1">'eval_b'</span><span class="p">:</span> <span class="n">eval_b</span><span class="p">,</span> <span class="s1">'eval_b.asnumpy()'</span><span class="p">:</span> <span class="n">eval_b</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()}</span>
</pre></div>
</div>
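<p>The same pattern works for the <code class="docutils literal"><span class="pre">row_sparse</span></code> variable bound earlier. The snippet below is a small illustrative extension of the example above (not part of the original tutorial): it assigns ones to <code class="docutils literal"><span class="pre">c</span></code> through <code class="docutils literal"><span class="pre">arg_dict</span></code> and re-runs <code class="docutils literal"><span class="pre">forward</span></code>.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Illustrative sketch: update the row_sparse variable `c` the same way
c_exec.arg_dict['c'][:] = mx.nd.ones(shape).tostype('row_sparse')
c_exec.forward()
eval_c = c_exec.outputs[0]
{'eval_c': eval_c, 'eval_c.asnumpy()': eval_c.asnumpy()}
</pre></div>
</div>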
</div>
</div>
<div class="section" id="symbol-composition-and-storage-type-inference">
<span id="symbol-composition-and-storage-type-inference"></span><h2>Symbol Composition and Storage Type Inference<a class="headerlink" href="#symbol-composition-and-storage-type-inference" title="Permalink to this headline"></a></h2>
<div class="section" id="basic-symbol-composition">
<span id="basic-symbol-composition"></span><h3>Basic Symbol Composition<a class="headerlink" href="#basic-symbol-composition" title="Permalink to this headline"></a></h3>
<p>The following example builds a simple element-wise addition expression with different storage types.
The sparse symbols are available in the <code class="docutils literal"><span class="pre">mx.sym.sparse</span></code> package.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="c1"># Element-wise addition of variables with "default" stype</span>
<span class="n">d</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">elemwise_add</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">a</span><span class="p">)</span>
<span class="c1"># Element-wise addition of variables with "csr" stype</span>
<span class="n">e</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">sparse</span><span class="o">.</span><span class="n">negative</span><span class="p">(</span><span class="n">b</span><span class="p">)</span>
<span class="c1"># Element-wise addition of variables with "row_sparse" stype</span>
<span class="n">f</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">sparse</span><span class="o">.</span><span class="n">elemwise_add</span><span class="p">(</span><span class="n">c</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span>
<span class="p">{</span><span class="s1">'d'</span><span class="p">:</span><span class="n">d</span><span class="p">,</span> <span class="s1">'e'</span><span class="p">:</span><span class="n">e</span><span class="p">,</span> <span class="s1">'f'</span><span class="p">:</span><span class="n">f</span><span class="p">}</span>
</pre></div>
</div>
</div>
<div class="section" id="storage-type-inference">
<span id="storage-type-inference"></span><h3>Storage Type Inference<a class="headerlink" href="#storage-type-inference" title="Permalink to this headline"></a></h3>
<p>What will the output storage types of sparse symbols be? In MXNet, for any sparse symbol, the result storage types are inferred from the storage types of the inputs.
You can read the <a class="reference external" href="https://mxnet.incubator.apache.org/versions/master/api/python/symbol/sparse.html">Sparse Symbol API</a> documentation to find out which output storage types each operator infers. In the example below we try out the storage types introduced in the Row Sparse and Compressed Sparse Row tutorials: <code class="docutils literal"><span class="pre">default</span></code> (dense), <code class="docutils literal"><span class="pre">csr</span></code>, and <code class="docutils literal"><span class="pre">row_sparse</span></code>.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">add_exec</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Group</span><span class="p">([</span><span class="n">d</span><span class="p">,</span> <span class="n">e</span><span class="p">,</span> <span class="n">f</span><span class="p">])</span><span class="o">.</span><span class="n">simple_bind</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(),</span> <span class="n">a</span><span class="o">=</span><span class="n">shape</span><span class="p">,</span> <span class="n">b</span><span class="o">=</span><span class="n">shape</span><span class="p">,</span> <span class="n">c</span><span class="o">=</span><span class="n">shape</span><span class="p">)</span>
<span class="n">add_exec</span><span class="o">.</span><span class="n">forward</span><span class="p">()</span>
<span class="n">dense_add</span> <span class="o">=</span> <span class="n">add_exec</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="c1"># The output storage type of elemwise_add(csr, csr) will be inferred as "csr"</span>
<span class="n">csr_add</span> <span class="o">=</span> <span class="n">add_exec</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="c1"># The output storage type of elemwise_add(row_sparse, row_sparse) will be inferred as "row_sparse"</span>
<span class="n">rsp_add</span> <span class="o">=</span> <span class="n">add_exec</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span>
<span class="p">{</span><span class="s1">'dense_add.stype'</span><span class="p">:</span> <span class="n">dense_add</span><span class="o">.</span><span class="n">stype</span><span class="p">,</span> <span class="s1">'csr_add.stype'</span><span class="p">:</span><span class="n">csr_add</span><span class="o">.</span><span class="n">stype</span><span class="p">,</span> <span class="s1">'rsp_add.stype'</span><span class="p">:</span> <span class="n">rsp_add</span><span class="o">.</span><span class="n">stype</span><span class="p">}</span>
</pre></div>
</div>
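<p>Storage type inference also applies to operators with mixed input storage types. The following sketch is an illustrative addition (not from the original tutorial): <code class="docutils literal"><span class="pre">sparse.dot</span></code> with a <code class="docutils literal"><span class="pre">csr</span></code> left-hand side and a dense right-hand side is expected to produce a dense output, but you can confirm what your MXNet version infers by checking the result’s <code class="docutils literal"><span class="pre">stype</span></code>.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Sketch: dot(csr, dense); the output storage type is inferred from the inputs
dot_cd = mx.sym.sparse.dot(b, a)
dot_exec = dot_cd.simple_bind(ctx=mx.cpu(), a=shape, b=shape)
dot_exec.forward()
# Expected to be 'default' (dense) on CPU; check the inferred value on your version
dot_exec.outputs[0].stype
</pre></div>
</div>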
</div>
<div class="section" id="storage-type-fallback">
<span id="storage-type-fallback"></span><h3>Storage Type Fallback<a class="headerlink" href="#storage-type-fallback" title="Permalink to this headline"></a></h3>
<p>For operators that don’t have a specialized implementation for certain sparse inputs, you can still use them with sparse data, at some performance cost. In MXNet, dense operators require all inputs and outputs to be in the dense format. If sparse inputs are provided, MXNet temporarily converts them into dense ones so that the dense operator can be used. If sparse outputs are provided, MXNet converts the dense outputs generated by the dense operator into the provided sparse format. Warning messages are printed when such a storage fallback event happens.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="c1"># `log` operator doesn't support sparse inputs at all, but we can fallback on the dense implementation</span>
<span class="n">csr_log</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">log</span><span class="p">(</span><span class="n">a</span><span class="p">)</span>
<span class="c1"># `elemwise_add` operator doesn't support adding csr with row_sparse, but we can fallback on the dense implementation</span>
<span class="n">csr_rsp_add</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">elemwise_add</span><span class="p">(</span><span class="n">b</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span>
<span class="n">fallback_exec</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Group</span><span class="p">([</span><span class="n">csr_rsp_add</span><span class="p">,</span> <span class="n">csr_log</span><span class="p">])</span><span class="o">.</span><span class="n">simple_bind</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">(),</span> <span class="n">a</span><span class="o">=</span><span class="n">shape</span><span class="p">,</span> <span class="n">b</span><span class="o">=</span><span class="n">shape</span><span class="p">,</span> <span class="n">c</span><span class="o">=</span><span class="n">shape</span><span class="p">)</span>
<span class="n">fallback_exec</span><span class="o">.</span><span class="n">forward</span><span class="p">()</span>
<span class="n">fallback_add</span> <span class="o">=</span> <span class="n">fallback_exec</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="n">fallback_log</span> <span class="o">=</span> <span class="n">fallback_exec</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="p">{</span><span class="s1">'fallback_add'</span><span class="p">:</span> <span class="n">fallback_add</span><span class="p">,</span> <span class="s1">'fallback_log'</span><span class="p">:</span> <span class="n">fallback_log</span><span class="p">}</span>
</pre></div>
</div>
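<p>To see what the fallback actually produced, you can inspect the storage types of the outputs, just as in the previous section. This check is an illustrative addition; since <code class="docutils literal"><span class="pre">simple_bind</span></code> allocated dense outputs for the dense implementations here, both results are expected to come back with the <code class="docutils literal"><span class="pre">default</span></code> storage type.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Sketch: the fallback results were computed by the dense implementations
{'fallback_add.stype': fallback_add.stype, 'fallback_log.stype': fallback_log.stype}
</pre></div>
</div>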
</div>
<div class="section" id="inspecting-storage-types-of-the-symbol-graph-work-in-progress">
<span id="inspecting-storage-types-of-the-symbol-graph-work-in-progress"></span><h3>Inspecting Storage Types of the Symbol Graph (Work in Progress)<a class="headerlink" href="#inspecting-storage-types-of-the-symbol-graph-work-in-progress" title="Permalink to this headline"></a></h3>
<p>When the environment variable <code class="docutils literal"><span class="pre">MXNET_INFER_STORAGE_TYPE_VERBOSE_LOGGING</span></code> is set to <code class="docutils literal"><span class="pre">1</span></code>, MXNet will log the storage type information of
operators’ inputs and outputs in the computation graph. For example, we can inspect the storage types of
a linear classification network with sparse operators as follows:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="c1"># Set logging level for executor</span>
<span class="kn">import</span> <span class="nn">mxnet</span> <span class="kn">as</span> <span class="nn">mx</span>
<span class="kn">import</span> <span class="nn">os</span>
<span class="n">os</span><span class="o">.</span><span class="n">environ</span><span class="p">[</span><span class="s1">'MXNET_INFER_STORAGE_TYPE_VERBOSE_LOGGING'</span><span class="p">]</span> <span class="o">=</span> <span class="s2">"1"</span>
<span class="c1"># Data in csr format</span>
<span class="n">data</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">var</span><span class="p">(</span><span class="s1">'data'</span><span class="p">,</span> <span class="n">stype</span><span class="o">=</span><span class="s1">'csr'</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">32</span><span class="p">,</span> <span class="mi">10000</span><span class="p">))</span>
<span class="c1"># Weight in row_sparse format</span>
<span class="n">weight</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">var</span><span class="p">(</span><span class="s1">'weight'</span><span class="p">,</span> <span class="n">stype</span><span class="o">=</span><span class="s1">'row_sparse'</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">10000</span><span class="p">,</span> <span class="mi">2</span><span class="p">))</span>
<span class="n">bias</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s2">"bias"</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,))</span>
<span class="n">dot</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">sparse</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">data</span><span class="p">,</span> <span class="n">weight</span><span class="p">)</span>
<span class="n">pred</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">broadcast_add</span><span class="p">(</span><span class="n">dot</span><span class="p">,</span> <span class="n">bias</span><span class="p">)</span>
<span class="n">y</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s2">"label"</span><span class="p">)</span>
<span class="n">output</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">SoftmaxOutput</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">pred</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="n">y</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s2">"output"</span><span class="p">)</span>
<span class="n">executor</span> <span class="o">=</span> <span class="n">output</span><span class="o">.</span><span class="n">simple_bind</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">())</span>
</pre></div>
</div>
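<p>The verbose logging is controlled entirely by the environment variable, so once you are done inspecting the graph you can switch it back off. This is an optional housekeeping step, not part of the original tutorial, and assumes the variable is re-read the next time a symbol is bound.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Turn storage-type inference logging back off for the rest of the session
os.environ['MXNET_INFER_STORAGE_TYPE_VERBOSE_LOGGING'] = "0"
</pre></div>
</div>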
</div>
</div>
<div class="section" id="training-with-module-apis">
<span id="training-with-module-apis"></span><h2>Training with Module APIs<a class="headerlink" href="#training-with-module-apis" title="Permalink to this headline"></a></h2>
<p>In the following section we’ll walk through how one can implement <strong>linear regression</strong> using sparse symbols and sparse optimizers.</p>
<p>The function you will explore is: <em>y = x<sub>1</sub> + 2x<sub>2</sub> + ... + 100x<sub>100</sub></em>, where <em>(x<sub>1</sub>, x<sub>2</sub>, ..., x<sub>100</sub>)</em> are input features and <em>y</em> is the corresponding label.</p>
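<p>To make the target function concrete, here is a tiny dense sketch of how a single label <em>y</em> would be generated from one feature vector. It is illustrative only and is not part of the training code below.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span>import numpy as np
# y = 1*x_1 + 2*x_2 + ... + 100*x_100 for a single feature vector x
x = np.random.rand(100)
coefficients = np.arange(1, 101)   # [1, 2, ..., 100]
y = np.dot(coefficients, x)
</pre></div>
</div>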
<div class="section" id="preparing-the-data">
<span id="preparing-the-data"></span><h3>Preparing the Data<a class="headerlink" href="#preparing-the-data" title="Permalink to this headline"></a></h3>
<p>In MXNet, both <a class="reference external" href="https://mxnet.incubator.apache.org/versions/master/api/python/io.html#mxnet.io.LibSVMIter">mx.io.LibSVMIter</a>
and <a class="reference external" href="https://mxnet.incubator.apache.org/versions/master/api/python/io.html#mxnet.io.NDArrayIter">mx.io.NDArrayIter</a>
support loading sparse data in CSR format. In this example, we’ll use the <code class="docutils literal"><span class="pre">NDArrayIter</span></code>.</p>
<p>You may see some warnings from SciPy. You don’t need to worry about those for this example.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="c1"># Random training data</span>
<span class="n">feature_dimension</span> <span class="o">=</span> <span class="mi">100</span>
<span class="n">train_data</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">rand_ndarray</span><span class="p">((</span><span class="mi">1000</span><span class="p">,</span> <span class="n">feature_dimension</span><span class="p">),</span> <span class="s1">'csr'</span><span class="p">,</span> <span class="mf">0.01</span><span class="p">)</span>
<span class="n">target_weight</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">arange</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">feature_dimension</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)</span><span class="o">.</span><span class="n">reshape</span><span class="p">((</span><span class="n">feature_dimension</span><span class="p">,</span> <span class="mi">1</span><span class="p">))</span>
<span class="n">train_label</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">train_data</span><span class="p">,</span> <span class="n">target_weight</span><span class="p">)</span>
<span class="n">batch_size</span> <span class="o">=</span> <span class="mi">1</span>
<span class="n">train_iter</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">io</span><span class="o">.</span><span class="n">NDArrayIter</span><span class="p">(</span><span class="n">train_data</span><span class="p">,</span> <span class="n">train_label</span><span class="p">,</span> <span class="n">batch_size</span><span class="p">,</span> <span class="n">last_batch_handle</span><span class="o">=</span><span class="s1">'discard'</span><span class="p">,</span> <span class="n">label_name</span><span class="o">=</span><span class="s1">'label'</span><span class="p">)</span>
</pre></div>
</div>
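<p>If you want to confirm that the iterator really yields sparse batches, a quick illustrative check (not in the original tutorial) is to pull one batch and look at its storage type; with <code class="docutils literal"><span class="pre">last_batch_handle='discard'</span></code> the CSR inputs should be preserved. Remember to reset the iterator afterwards so training starts from the first batch.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Sketch: inspect one batch from the iterator, then reset it
for batch in train_iter:
    print(batch.data[0].stype, batch.data[0].shape)  # expected: 'csr', (1, 100)
    break
train_iter.reset()
</pre></div>
</div>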
</div>
<div class="section" id="defining-the-model">
<span id="defining-the-model"></span><h3>Defining the Model<a class="headerlink" href="#defining-the-model" title="Permalink to this headline"></a></h3>
<p>Below is an example of a linear regression model specifying the storage type of the variables.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">initializer</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">initializer</span><span class="o">.</span><span class="n">Normal</span><span class="p">(</span><span class="n">sigma</span><span class="o">=</span><span class="mf">0.01</span><span class="p">)</span>
<span class="n">X</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'data'</span><span class="p">,</span> <span class="n">stype</span><span class="o">=</span><span class="s1">'csr'</span><span class="p">)</span>
<span class="n">Y</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'label'</span><span class="p">)</span>
<span class="n">weight</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'weight'</span><span class="p">,</span> <span class="n">stype</span><span class="o">=</span><span class="s1">'row_sparse'</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="n">feature_dimension</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">init</span><span class="o">=</span><span class="n">initializer</span><span class="p">)</span>
<span class="n">bias</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">symbol</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="s1">'bias'</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="p">))</span>
<span class="n">pred</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">broadcast_add</span><span class="p">(</span><span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">sparse</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">X</span><span class="p">,</span> <span class="n">weight</span><span class="p">),</span> <span class="n">bias</span><span class="p">)</span>
<span class="n">lro</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">sym</span><span class="o">.</span><span class="n">LinearRegressionOutput</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">pred</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="n">Y</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s2">"lro"</span><span class="p">)</span>
</pre></div>
</div>
<p>The above network uses the following symbols:</p>
<ol class="simple">
<li><code class="docutils literal"><span class="pre">Variable</span> <span class="pre">X</span></code>: The placeholder for sparse data inputs. The <code class="docutils literal"><span class="pre">csr</span></code> stype indicates that the array to hold is in CSR format.</li>
<li><code class="docutils literal"><span class="pre">Variable</span> <span class="pre">Y</span></code>: The placeholder for dense labels.</li>
<li><code class="docutils literal"><span class="pre">Variable</span> <span class="pre">weight</span></code>: The placeholder for the weight to learn. The <code class="docutils literal"><span class="pre">stype</span></code> of weight is specified as <code class="docutils literal"><span class="pre">row_sparse</span></code> so that it is initialized as RowSparseNDArray,
and the optimizer will perform sparse update rules on it. The <code class="docutils literal"><span class="pre">init</span></code> attribute specifies what initializer to use for this variable.</li>
<li><code class="docutils literal"><span class="pre">Variable</span> <span class="pre">bias</span></code>: The placeholder for the bias to learn.</li>
<li><code class="docutils literal"><span class="pre">sparse.dot</span></code>: The dot product operation of <code class="docutils literal"><span class="pre">X</span></code> and <code class="docutils literal"><span class="pre">weight</span></code>. The sparse implementation will be invoked to handle <code class="docutils literal"><span class="pre">csr</span></code> and <code class="docutils literal"><span class="pre">row_sparse</span></code> inputs.</li>
<li><code class="docutils literal"><span class="pre">broadcast_add</span></code>: The broadcasting add operation to apply <code class="docutils literal"><span class="pre">bias</span></code>.</li>
<li><code class="docutils literal"><span class="pre">LinearRegressionOutput</span></code>: The output layer which computes <em>l2</em> loss against its input and the labels provided to it.</li>
</ol>
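<p>Before binding a module to this symbol, it can be handy to double-check which free variables the composed network expects; these are the names the Module below refers to through <code class="docutils literal"><span class="pre">data_names</span></code> and <code class="docutils literal"><span class="pre">label_names</span></code>. This is a small optional check, not part of the original tutorial.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Sketch: list the free variables of the network
lro.list_arguments()   # expected: ['data', 'weight', 'bias', 'label']
</pre></div>
</div>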
</div>
<div class="section" id="training-the-model">
<span id="training-the-model"></span><h3>Training the model<a class="headerlink" href="#training-the-model" title="Permalink to this headline"></a></h3>
<p>Once we have defined the model structure, the next step is to create a module and initialize the parameters and optimizer.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="c1"># Create module</span>
<span class="n">mod</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">mod</span><span class="o">.</span><span class="n">Module</span><span class="p">(</span><span class="n">symbol</span><span class="o">=</span><span class="n">lro</span><span class="p">,</span> <span class="n">data_names</span><span class="o">=</span><span class="p">[</span><span class="s1">'data'</span><span class="p">],</span> <span class="n">label_names</span><span class="o">=</span><span class="p">[</span><span class="s1">'label'</span><span class="p">])</span>
<span class="c1"># Allocate memory by giving the input data and label shapes</span>
<span class="n">mod</span><span class="o">.</span><span class="n">bind</span><span class="p">(</span><span class="n">data_shapes</span><span class="o">=</span><span class="n">train_iter</span><span class="o">.</span><span class="n">provide_data</span><span class="p">,</span> <span class="n">label_shapes</span><span class="o">=</span><span class="n">train_iter</span><span class="o">.</span><span class="n">provide_label</span><span class="p">)</span>
<span class="c1"># Initialize parameters by random numbers</span>
<span class="n">mod</span><span class="o">.</span><span class="n">init_params</span><span class="p">(</span><span class="n">initializer</span><span class="o">=</span><span class="n">initializer</span><span class="p">)</span>
<span class="c1"># Use SGD as the optimizer, which performs sparse update on "row_sparse" weight</span>
<span class="n">sgd</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">optimizer</span><span class="o">.</span><span class="n">SGD</span><span class="p">(</span><span class="n">learning_rate</span><span class="o">=</span><span class="mf">0.05</span><span class="p">,</span> <span class="n">rescale_grad</span><span class="o">=</span><span class="mf">1.0</span><span class="o">/</span><span class="n">batch_size</span><span class="p">,</span> <span class="n">momentum</span><span class="o">=</span><span class="mf">0.9</span><span class="p">)</span>
<span class="n">mod</span><span class="o">.</span><span class="n">init_optimizer</span><span class="p">(</span><span class="n">optimizer</span><span class="o">=</span><span class="n">sgd</span><span class="p">)</span>
</pre></div>
</div>
<p>Finally, we train the parameters of the model to fit the training data by using the <code class="docutils literal"><span class="pre">forward</span></code>, <code class="docutils literal"><span class="pre">backward</span></code>, and <code class="docutils literal"><span class="pre">update</span></code> methods in Module.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span><span class="c1"># Use mean square error as the metric</span>
<span class="n">metric</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">metric</span><span class="o">.</span><span class="n">create</span><span class="p">(</span><span class="s1">'MSE'</span><span class="p">)</span>
<span class="c1"># Train 10 epochs</span>
<span class="k">for</span> <span class="n">epoch</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">10</span><span class="p">):</span>
<span class="n">train_iter</span><span class="o">.</span><span class="n">reset</span><span class="p">()</span>
<span class="n">metric</span><span class="o">.</span><span class="n">reset</span><span class="p">()</span>
<span class="k">for</span> <span class="n">batch</span> <span class="ow">in</span> <span class="n">train_iter</span><span class="p">:</span>
<span class="n">mod</span><span class="o">.</span><span class="n">forward</span><span class="p">(</span><span class="n">batch</span><span class="p">,</span> <span class="n">is_train</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span> <span class="c1"># compute predictions</span>
<span class="n">mod</span><span class="o">.</span><span class="n">update_metric</span><span class="p">(</span><span class="n">metric</span><span class="p">,</span> <span class="n">batch</span><span class="o">.</span><span class="n">label</span><span class="p">)</span> <span class="c1"># accumulate prediction accuracy</span>
<span class="n">mod</span><span class="o">.</span><span class="n">backward</span><span class="p">()</span> <span class="c1"># compute gradients</span>
<span class="n">mod</span><span class="o">.</span><span class="n">update</span><span class="p">()</span> <span class="c1"># update parameters</span>
<span class="k">print</span><span class="p">(</span><span class="s1">'Epoch </span><span class="si">%d</span><span class="s1">, Metric = </span><span class="si">%s</span><span class="s1">'</span> <span class="o">%</span> <span class="p">(</span><span class="n">epoch</span><span class="p">,</span> <span class="n">metric</span><span class="o">.</span><span class="n">get</span><span class="p">()))</span>
</pre></div>
</div>
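<p>After training, you may want to sanity-check the learned parameters against the target weights used to generate the data. The following sketch is an illustrative addition: it pulls the parameters out of the module, and because of the declared <code class="docutils literal"><span class="pre">stype</span></code> the weight should come back as a RowSparseNDArray.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span># Sketch: compare a few learned weights against the target weights [1, 2, ..., 100]
arg_params, aux_params = mod.get_params()
learned_weight = arg_params['weight']
print(learned_weight.stype)                  # expected: 'row_sparse'
print(learned_weight.asnumpy()[:5].ravel())  # should approach [1. 2. 3. 4. 5.] as training converges
print(target_weight.asnumpy()[:5].ravel())
</pre></div>
</div>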
</div>
<div class="section" id="training-the-model-with-multiple-machines">
<span id="training-the-model-with-multiple-machines"></span><h3>Training the model with multiple machines<a class="headerlink" href="#training-the-model-with-multiple-machines" title="Permalink to this headline"></a></h3>
<p>To train a sparse model with multiple machines, please refer to the example in <a class="reference external" href="https://github.com/apache/incubator-mxnet/tree/0.12.0/example/sparse">mxnet/example/sparse/</a></p>
<div class="btn-group" role="group">
<div class="download-btn"><a download="train.ipynb" href="train.ipynb"><span class="glyphicon glyphicon-download-alt"></span> train.ipynb</a></div></div></div>
</div>
</div>
</div>
</div>
<div aria-label="main navigation" class="sphinxsidebar rightsidebar" role="navigation">
<div class="sphinxsidebarwrapper">
<h3><a href="../../index.html">Table Of Contents</a></h3>
<ul>
<li><a class="reference internal" href="#">Train a Linear Regression Model with Sparse Symbols</a><ul>
<li><a class="reference internal" href="#prerequisites">Prerequisites</a></li>
<li><a class="reference internal" href="#variables">Variables</a><ul>
<li><a class="reference internal" href="#variable-storage-types">Variable Storage Types</a></li>
<li><a class="reference internal" href="#bind-with-sparse-arrays">Bind with Sparse Arrays</a></li>
</ul>
</li>
<li><a class="reference internal" href="#symbol-composition-and-storage-type-inference">Symbol Composition and Storage Type Inference</a><ul>
<li><a class="reference internal" href="#basic-symbol-composition">Basic Symbol Composition</a></li>
<li><a class="reference internal" href="#storage-type-inference">Storage Type Inference</a></li>
<li><a class="reference internal" href="#storage-type-fallback">Storage Type Fallback</a></li>
<li><a class="reference internal" href="#inspecting-storage-types-of-the-symbol-graph-work-in-progress">Inspecting Storage Types of the Symbol Graph (Work in Progress)</a></li>
</ul>
</li>
<li><a class="reference internal" href="#training-with-module-apis">Training with Module APIs</a><ul>
<li><a class="reference internal" href="#preparing-the-data">Preparing the Data</a></li>
<li><a class="reference internal" href="#defining-the-model">Defining the Model</a></li>
<li><a class="reference internal" href="#training-the-model">Training the model</a></li>
<li><a class="reference internal" href="#training-the-model-with-multiple-machines">Training the model with multiple machines</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
</div>
</div><div class="footer">
<div class="section-disclaimer">
<div class="container">
<div>
<img height="60" src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/apache_incubator_logo.png"/>
<p>
Apache MXNet is an effort undergoing incubation at The Apache Software Foundation (ASF), <strong>sponsored by the <i>Apache Incubator</i></strong>. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.
</p>
<p>
"Copyright © 2017, The Apache Software Foundation
Apache MXNet, MXNet, Apache, the Apache feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the Apache Software Foundation."
</p>
</div>
</div>
</div>
</div> <!-- pagename != index -->
</div>
<script crossorigin="anonymous" integrity="sha384-0mSbJDEHialfmuBBQP6A4Qrprq5OVfW37PRR3j5ELqxss1yVqOtnepnHVP9aJ7xS" src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>
<script src="../../_static/js/sidebar.js" type="text/javascript"></script>
<script src="../../_static/js/search.js" type="text/javascript"></script>
<script src="../../_static/js/navbar.js" type="text/javascript"></script>
<script src="../../_static/js/clipboard.min.js" type="text/javascript"></script>
<script src="../../_static/js/copycode.js" type="text/javascript"></script>
<script src="../../_static/js/page.js" type="text/javascript"></script>
<script type="text/javascript">
$('body').ready(function () {
$('body').css('visibility', 'visible');
});
</script>
</body>
</html>