| <!DOCTYPE html> |
| |
| <html lang="en"> |
| <head> |
| <meta charset="utf-8"/> |
| <meta content="IE=edge" http-equiv="X-UA-Compatible"/> |
| <meta content="width=device-width, initial-scale=1" name="viewport"/> |
| <meta content="NDArray - Imperative tensor operations on CPU/GPU" property="og:title"> |
| <meta content="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/og-logo.png" property="og:image"> |
| <meta content="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/og-logo.png" property="og:image:secure_url"> |
| <meta content="NDArray - Imperative tensor operations on CPU/GPU" property="og:description"/> |
| <title>NDArray - Imperative tensor operations on CPU/GPU — mxnet documentation</title> |
| <link crossorigin="anonymous" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7" rel="stylesheet"/> |
| <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.5.0/css/font-awesome.min.css" rel="stylesheet"/> |
| <link href="../../_static/basic.css" rel="stylesheet" type="text/css"> |
| <link href="../../_static/pygments.css" rel="stylesheet" type="text/css"> |
| <link href="../../_static/mxnet.css" rel="stylesheet" type="text/css"/> |
| <script type="text/javascript"> |
| var DOCUMENTATION_OPTIONS = { |
| URL_ROOT: '../../', |
| VERSION: '', |
| COLLAPSE_INDEX: false, |
| FILE_SUFFIX: '.html', |
| HAS_SOURCE: true, |
| SOURCELINK_SUFFIX: '.txt' |
| }; |
| </script> |
| <script src="https://code.jquery.com/jquery-1.11.1.min.js" type="text/javascript"></script> |
| <script src="../../_static/underscore.js" type="text/javascript"></script> |
| <script src="../../_static/searchtools_custom.js" type="text/javascript"></script> |
| <script src="../../_static/doctools.js" type="text/javascript"></script> |
| <script src="../../_static/selectlang.js" type="text/javascript"></script> |
| <script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script> |
| <script type="text/javascript"> jQuery(function() { Search.loadIndex("/versions/0.12.1/searchindex.js"); Search.init();}); </script> |
| <!-- --> |
| <!-- <script type="text/javascript" src="../../_static/jquery.js"></script> --> |
| <!-- --> |
| <!-- <script type="text/javascript" src="../../_static/underscore.js"></script> --> |
| <!-- --> |
| <!-- <script type="text/javascript" src="../../_static/doctools.js"></script> --> |
| <!-- --> |
| <!-- <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> --> |
| <!-- --> |
| <link href="../../genindex.html" rel="index" title="Index"> |
| <link href="../../search.html" rel="search" title="Search"/> |
| <link href="../index.html" rel="up" title="Tutorials"/> |
| <link href="symbol.html" rel="next" title="Symbol - Neural network graphs and auto-differentiation"/> |
| <link href="../index.html" rel="prev" title="Tutorials"/> |
| <link href="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet-icon.png" rel="icon" type="image/png"/> |
</head>
| <body background="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet-background-compressed.jpeg" role="document"> |
| <div class="content-block"><div class="navbar navbar-fixed-top"> |
| <div class="container" id="navContainer"> |
| <div class="innder" id="header-inner"> |
| <h1 id="logo-wrap"> |
| <a href="../../" id="logo"><img src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet_logo.png"/></a> |
| </h1> |
| <nav class="nav-bar" id="main-nav"> |
| <a class="main-nav-link" href="/versions/0.12.1/install/index.html">Install</a> |
| <span id="dropdown-menu-position-anchor"> |
| <a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">Gluon <span class="caret"></span></a> |
| <ul class="dropdown-menu navbar-menu" id="package-dropdown-menu"> |
| <li><a class="main-nav-link" href="/versions/0.12.1/tutorials/gluon/gluon.html">About</a></li> |
| <li><a class="main-nav-link" href="https://www.d2l.ai/">Dive into Deep Learning</a></li> |
| <li><a class="main-nav-link" href="https://gluon-cv.mxnet.io">GluonCV Toolkit</a></li> |
| <li><a class="main-nav-link" href="https://gluon-nlp.mxnet.io/">GluonNLP Toolkit</a></li> |
| </ul> |
| </span> |
| <span id="dropdown-menu-position-anchor"> |
| <a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">API <span class="caret"></span></a> |
| <ul class="dropdown-menu navbar-menu" id="package-dropdown-menu"> |
| <li><a class="main-nav-link" href="/versions/0.12.1/api/python/index.html">Python</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/api/c++/index.html">C++</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/api/julia/index.html">Julia</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/api/perl/index.html">Perl</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/api/r/index.html">R</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/api/scala/index.html">Scala</a></li> |
| </ul> |
| </span> |
| <span id="dropdown-menu-position-anchor-docs"> |
| <a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">Docs <span class="caret"></span></a> |
| <ul class="dropdown-menu navbar-menu" id="package-dropdown-menu-docs"> |
| <li><a class="main-nav-link" href="/versions/0.12.1/faq/index.html">FAQ</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/tutorials/index.html">Tutorials</a> |
| <li><a class="main-nav-link" href="https://github.com/apache/incubator-mxnet/tree/0.12.1/example">Examples</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/architecture/index.html">Architecture</a></li> |
| <li><a class="main-nav-link" href="https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+Home">Developer Wiki</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/model_zoo/index.html">Model Zoo</a></li> |
| <li><a class="main-nav-link" href="https://github.com/onnx/onnx-mxnet">ONNX</a></li> |
</ul>
| </span> |
| <span id="dropdown-menu-position-anchor-community"> |
| <a aria-expanded="true" aria-haspopup="true" class="main-nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button">Community <span class="caret"></span></a> |
| <ul class="dropdown-menu navbar-menu" id="package-dropdown-menu-community"> |
| <li><a class="main-nav-link" href="http://discuss.mxnet.io">Forum</a></li> |
| <li><a class="main-nav-link" href="https://github.com/apache/incubator-mxnet/tree/0.12.1">Github</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/community/contribute.html">Contribute</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/community/powered_by.html">Powered By</a></li> |
| </ul> |
| </span> |
| <span id="dropdown-menu-position-anchor-version" style="position: relative"><a href="#" class="main-nav-link dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="true">0.12.1<span class="caret"></span></a><ul id="package-dropdown-menu" class="dropdown-menu"><li><a href="/">master</a></li><li><a href="/versions/1.7.0/">1.7.0</a></li><li><a href=/versions/1.6.0/>1.6.0</a></li><li><a href=/versions/1.5.0/index.html>1.5.0</a></li><li><a href=/versions/1.4.1/index.html>1.4.1</a></li><li><a href=/versions/1.3.1/index.html>1.3.1</a></li><li><a href=/versions/1.2.1/index.html>1.2.1</a></li><li><a href=/versions/1.1.0/index.html>1.1.0</a></li><li><a href=/versions/1.0.0/index.html>1.0.0</a></li><li><a href=/versions/0.12.1/index.html>0.12.1</a></li><li><a href=/versions/0.11.0/index.html>0.11.0</a></li></ul></span></nav> |
| <script> function getRootPath(){ return "../../" } </script> |
| <div class="burgerIcon dropdown"> |
| <a class="dropdown-toggle" data-toggle="dropdown" href="#" role="button">☰</a> |
| <ul class="dropdown-menu" id="burgerMenu"> |
| <li><a href="/versions/0.12.1/install/index.html">Install</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/tutorials/index.html">Tutorials</a></li> |
| <li class="dropdown-submenu dropdown"> |
| <a aria-expanded="true" aria-haspopup="true" class="dropdown-toggle burger-link" data-toggle="dropdown" href="#" tabindex="-1">Gluon</a> |
| <ul class="dropdown-menu navbar-menu" id="package-dropdown-menu"> |
| <li><a class="main-nav-link" href="/versions/0.12.1/tutorials/gluon/gluon.html">About</a></li> |
| <li><a class="main-nav-link" href="http://gluon.mxnet.io">The Straight Dope (Tutorials)</a></li> |
| <li><a class="main-nav-link" href="https://gluon-cv.mxnet.io">GluonCV Toolkit</a></li> |
| <li><a class="main-nav-link" href="https://gluon-nlp.mxnet.io/">GluonNLP Toolkit</a></li> |
| </ul> |
| </li> |
| <li class="dropdown-submenu"> |
| <a aria-expanded="true" aria-haspopup="true" class="dropdown-toggle burger-link" data-toggle="dropdown" href="#" tabindex="-1">API</a> |
| <ul class="dropdown-menu"> |
| <li><a class="main-nav-link" href="/versions/0.12.1/api/python/index.html">Python</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/api/c++/index.html">C++</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/api/julia/index.html">Julia</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/api/perl/index.html">Perl</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/api/r/index.html">R</a></li> |
| <li><a class="main-nav-link" href="/versions/0.12.1/api/scala/index.html">Scala</a></li> |
| </ul> |
| </li> |
| <li class="dropdown-submenu"> |
| <a aria-expanded="true" aria-haspopup="true" class="dropdown-toggle burger-link" data-toggle="dropdown" href="#" tabindex="-1">Docs</a> |
| <ul class="dropdown-menu"> |
| <li><a href="/versions/0.12.1/faq/index.html" tabindex="-1">FAQ</a></li> |
| <li><a href="/versions/0.12.1/tutorials/index.html" tabindex="-1">Tutorials</a></li> |
| <li><a href="https://github.com/apache/incubator-mxnet/tree/0.12.1/example" tabindex="-1">Examples</a></li> |
| <li><a href="/versions/0.12.1/architecture/index.html" tabindex="-1">Architecture</a></li> |
| <li><a href="https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+Home" tabindex="-1">Developer Wiki</a></li> |
| <li><a href="/versions/0.12.1/model_zoo/index.html" tabindex="-1">Gluon Model Zoo</a></li> |
| <li><a href="https://github.com/onnx/onnx-mxnet" tabindex="-1">ONNX</a></li> |
| </ul> |
| </li> |
| <li class="dropdown-submenu dropdown"> |
| <a aria-haspopup="true" class="dropdown-toggle burger-link" data-toggle="dropdown" href="#" role="button" tabindex="-1">Community</a> |
| <ul class="dropdown-menu"> |
| <li><a href="http://discuss.mxnet.io" tabindex="-1">Forum</a></li> |
| <li><a href="https://github.com/apache/incubator-mxnet/tree/0.12.1" tabindex="-1">Github</a></li> |
| <li><a href="/versions/0.12.1/community/contribute.html" tabindex="-1">Contribute</a></li> |
| <li><a href="/versions/0.12.1/community/powered_by.html" tabindex="-1">Powered By</a></li> |
| </ul> |
| </li> |
| <li id="dropdown-menu-position-anchor-version-mobile" class="dropdown-submenu" style="position: relative"><a href="#" tabindex="-1">0.12.1</a><ul class="dropdown-menu"><li><a tabindex="-1" href=/>master</a></li><li><a tabindex="-1" href=/versions/1.6.0/>1.6.0</a></li><li><a tabindex="-1" href=/versions/1.5.0/index.html>1.5.0</a></li><li><a tabindex="-1" href=/versions/1.4.1/index.html>1.4.1</a></li><li><a tabindex="-1" href=/versions/1.3.1/index.html>1.3.1</a></li><li><a tabindex="-1" href=/versions/1.2.1/index.html>1.2.1</a></li><li><a tabindex="-1" href=/versions/1.1.0/index.html>1.1.0</a></li><li><a tabindex="-1" href=/versions/1.0.0/index.html>1.0.0</a></li><li><a tabindex="-1" href=/versions/0.12.1/index.html>0.12.1</a></li><li><a tabindex="-1" href=/versions/0.11.0/index.html>0.11.0</a></li></ul></li></ul> |
| </div> |
| <div class="plusIcon dropdown"> |
| <a class="dropdown-toggle" data-toggle="dropdown" href="#" role="button"><span aria-hidden="true" class="glyphicon glyphicon-plus"></span></a> |
| <ul class="dropdown-menu dropdown-menu-right" id="plusMenu"></ul> |
| </div> |
| <div id="search-input-wrap"> |
| <form action="../../search.html" autocomplete="off" class="" method="get" role="search"> |
| <div class="form-group inner-addon left-addon"> |
| <i class="glyphicon glyphicon-search"></i> |
| <input class="form-control" name="q" placeholder="Search" type="text"/> |
| </div> |
| <input name="check_keywords" type="hidden" value="yes"> |
| <input name="area" type="hidden" value="default"/> |
| </input></form> |
| <div id="search-preview"></div> |
| </div> |
| <div id="searchIcon"> |
| <span aria-hidden="true" class="glyphicon glyphicon-search"></span> |
| </div> |
| <!-- <div id="lang-select-wrap"> --> |
| <!-- <label id="lang-select-label"> --> |
| <!-- <\!-- <i class="fa fa-globe"></i> -\-> --> |
| <!-- <span></span> --> |
| <!-- </label> --> |
| <!-- <select id="lang-select"> --> |
| <!-- <option value="en">Eng</option> --> |
| <!-- <option value="zh">中文</option> --> |
| <!-- </select> --> |
| <!-- </div> --> |
| <!-- <a id="mobile-nav-toggle"> |
| <span class="mobile-nav-toggle-bar"></span> |
| <span class="mobile-nav-toggle-bar"></span> |
| <span class="mobile-nav-toggle-bar"></span> |
| </a> --> |
| </div> |
| </div> |
| </div> |
| <script type="text/javascript"> |
| $('body').css('background', 'white'); |
| </script> |
| <div class="container"> |
| <div class="row"> |
| <div aria-label="main navigation" class="sphinxsidebar leftsidebar" role="navigation"> |
| <div class="sphinxsidebarwrapper"> |
| <ul class="current"> |
| <li class="toctree-l1"><a class="reference internal" href="../../api/python/index.html">Python Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../api/r/index.html">R Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../api/julia/index.html">Julia Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../api/c++/index.html">C++ Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../api/scala/index.html">Scala Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../api/perl/index.html">Perl Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../faq/index.html">HowTo Documents</a></li> |
| <li class="toctree-l1"><a class="reference internal" href="../../architecture/index.html">System Documents</a></li> |
| <li class="toctree-l1 current"><a class="reference internal" href="../index.html">Tutorials</a><ul class="current"> |
| <li class="toctree-l2 current"><a class="reference internal" href="../index.html#python">Python</a><ul class="current"> |
| <li class="toctree-l3 current"><a class="reference internal" href="../index.html#basic">Basic</a><ul class="current"> |
| <li class="toctree-l4 current"><a class="current reference internal" href="#">NDArray - Imperative tensor operations on CPU/GPU</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="symbol.html">Symbol - Neural network graphs and auto-differentiation</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="module.html">Module - Neural network training and inference</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="data.html">Iterators - Loading data</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../index.html#training-and-inference">Training and Inference</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../index.html#sparse-ndarray">Sparse NDArray</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../index.html#contributing-tutorials">Contributing Tutorials</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l1"><a class="reference internal" href="../../community/index.html">Community</a></li> |
| </ul> |
| </div> |
| </div> |
| <div class="content"> |
| <div class="page-tracker"></div> |
| <div class="section" id="ndarray-imperative-tensor-operations-on-cpu-gpu"> |
| <span id="ndarray-imperative-tensor-operations-on-cpu-gpu"></span><h1>NDArray - Imperative tensor operations on CPU/GPU<a class="headerlink" href="#ndarray-imperative-tensor-operations-on-cpu-gpu" title="Permalink to this headline">¶</a></h1> |
| <p>In <em>MXNet</em>, <code class="docutils literal"><span class="pre">NDArray</span></code> is the core data structure for all mathematical |
computations. An <code class="docutils literal"><span class="pre">NDArray</span></code> represents a multidimensional, fixed-size homogeneous
array. If you’re familiar with the scientific computing Python package
| <a class="reference external" href="http://www.numpy.org/">NumPy</a>, you might notice that <code class="docutils literal"><span class="pre">mxnet.ndarray</span></code> is similar |
| to <code class="docutils literal"><span class="pre">numpy.ndarray</span></code>. Like the corresponding NumPy data structure, MXNet’s |
| <code class="docutils literal"><span class="pre">NDArray</span></code> enables imperative computation.</p> |
| <p>So you might wonder, why not just use NumPy? MXNet offers two compelling |
| advantages. First, MXNet’s <code class="docutils literal"><span class="pre">NDArray</span></code> supports fast execution on a wide range of |
| hardware configurations, including CPU, GPU, and multi-GPU machines. <em>MXNet</em> |
| also scales to distributed systems in the cloud. Second, MXNet’s <code class="docutils literal"><span class="pre">NDArray</span></code> |
| executes code lazily, allowing it to automatically parallelize multiple |
| operations across the available hardware.</p> |
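<p>As a minimal sketch of what lazy execution looks like in practice (assuming MXNet is installed and imported as <code class="docutils literal"><span class="pre">mx</span></code>): operators return immediately while the backend engine schedules the actual work, and results are materialized when we explicitly wait for them, e.g. with <code class="docutils literal"><span class="pre">asnumpy</span></code> or <code class="docutils literal"><span class="pre">mx.nd.waitall()</span></code>.</p>
<div class="highlight-python"><div class="highlight"><pre><span></span>import mxnet as mx

# These calls return as soon as the operations are pushed to the backend
# engine; the computation itself may still be running asynchronously.
a = mx.nd.ones((1000, 1000))
b = a * 2 + 1

# Block until all pending operations have finished ...
mx.nd.waitall()
# ... or fetch a result, which also waits for b to be computed.
print(b.asnumpy()[0, 0])
</pre></div></div>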
| <p>An <code class="docutils literal"><span class="pre">NDArray</span></code> is a multidimensional array of numbers with the same type. We |
| could represent the coordinates of a point in 3D space, e.g. <code class="docutils literal"><span class="pre">[2,</span> <span class="pre">1,</span> <span class="pre">6]</span></code> as a 1D |
| array with shape (3). Similarly, we could represent a 2D array. Below, we |
| present an array with length 2 along the first axis and length 3 along the |
| second axis.</p> |
| <div class="highlight-default"><div class="highlight"><pre><span></span><span class="p">[[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">]</span> |
| <span class="p">[</span><span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">]]</span> |
| </pre></div> |
| </div> |
| <p>Note that here the use of “dimension” is overloaded. When we say a 2D array, we |
| mean an array with 2 axes, not an array with two components.</p> |
| <p>Each NDArray supports some important attributes that you’ll often want to query:</p> |
| <ul class="simple"> |
| <li><strong>ndarray.shape</strong>: The dimensions of the array. It is a tuple of integers |
| indicating the length of the array along each axis. For a matrix with <code class="docutils literal"><span class="pre">n</span></code> rows |
| and <code class="docutils literal"><span class="pre">m</span></code> columns, its <code class="docutils literal"><span class="pre">shape</span></code> will be <code class="docutils literal"><span class="pre">(n,</span> <span class="pre">m)</span></code>.</li> |
| <li><strong>ndarray.dtype</strong>: A <code class="docutils literal"><span class="pre">numpy</span></code> <em>type</em> object describing the type of its |
| elements.</li> |
<li><strong>ndarray.size</strong>: The total number of components in the array, equal to the
product of the elements of its <code class="docutils literal"><span class="pre">shape</span></code>.</li>
| <li><strong>ndarray.context</strong>: The device on which this array is stored, e.g. <code class="docutils literal"><span class="pre">cpu()</span></code> or |
| <code class="docutils literal"><span class="pre">gpu(1)</span></code>.</li> |
| </ul> |
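<p>As a quick illustration (a minimal sketch, assuming MXNet is imported as <code class="docutils literal"><span class="pre">mx</span></code>), we can build the 2D array shown above and query these attributes directly:</p>
<div class="highlight-python"><div class="highlight"><pre><span></span>import mxnet as mx

x = mx.nd.array([[0, 1, 2], [3, 4, 5]])
print(x.shape)    # (2, 3)
print(x.dtype)    # numpy.float32, the default element type
print(x.size)     # 6
print(x.context)  # cpu(0)
</pre></div></div>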
| <div class="section" id="prerequisites"> |
| <span id="prerequisites"></span><h2>Prerequisites<a class="headerlink" href="#prerequisites" title="Permalink to this headline">¶</a></h2> |
| <p>To complete this tutorial, we need:</p> |
| <ul> |
| <li><p class="first">MXNet. See the instructions for your operating system in <a class="reference external" href="/versions/0.12.1/install/index.html">Setup and Installation</a></p> |
| </li> |
| <li><p class="first"><a class="reference external" href="http://jupyter.org/">Jupyter</a></p> |
| <div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">pip</span> <span class="n">install</span> <span class="n">jupyter</span> |
| </pre></div> |
| </div> |
| </li> |
| <li><p class="first">GPUs - A section of this tutorial uses GPUs. If you don’t have GPUs on your |
| machine, simply set the variable gpu_device (set in the GPUs section of this |
| tutorial) to mx.cpu().</p> |
| </li> |
| </ul> |
| </div> |
| <div class="section" id="array-creation"> |
| <span id="array-creation"></span><h2>Array Creation<a class="headerlink" href="#array-creation" title="Permalink to this headline">¶</a></h2> |
| <p>There are a few different ways to create an <code class="docutils literal"><span class="pre">NDArray</span></code>.</p> |
| <ul class="simple"> |
| <li>We can create an NDArray from a regular Python list or tuple by using the <code class="docutils literal"><span class="pre">array</span></code> function:</li> |
| </ul> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">mxnet</span> <span class="kn">as</span> <span class="nn">mx</span> |
| <span class="c1"># create a 1-dimensional array with a python list</span> |
| <span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">])</span> |
| <span class="c1"># create a 2-dimensional array with a nested python list</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">array</span><span class="p">([[</span><span class="mi">1</span><span class="p">,</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">],</span> <span class="p">[</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">,</span><span class="mi">4</span><span class="p">]])</span> |
| <span class="p">{</span><span class="s1">'a.shape'</span><span class="p">:</span><span class="n">a</span><span class="o">.</span><span class="n">shape</span><span class="p">,</span> <span class="s1">'b.shape'</span><span class="p">:</span><span class="n">b</span><span class="o">.</span><span class="n">shape</span><span class="p">}</span> |
| </pre></div> |
| </div> |
| <ul class="simple"> |
| <li>We can also create an MXNet NDArray from a <code class="docutils literal"><span class="pre">numpy.ndarray</span></code> object:</li> |
| </ul> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">numpy</span> <span class="kn">as</span> <span class="nn">np</span> |
| <span class="kn">import</span> <span class="nn">math</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">arange</span><span class="p">(</span><span class="mi">15</span><span class="p">)</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span><span class="mi">5</span><span class="p">)</span> |
| <span class="c1"># create a 2-dimensional array from a numpy.ndarray object</span> |
| <span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">c</span><span class="p">)</span> |
| <span class="p">{</span><span class="s1">'a.shape'</span><span class="p">:</span><span class="n">a</span><span class="o">.</span><span class="n">shape</span><span class="p">}</span> |
| </pre></div> |
| </div> |
| <p>We can specify the element type with the option <code class="docutils literal"><span class="pre">dtype</span></code>, which accepts a numpy |
| type. By default, <code class="docutils literal"><span class="pre">float32</span></code> is used:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="c1"># float32 is used by default</span> |
| <span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">])</span> |
| <span class="c1"># create an int32 array</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">np</span><span class="o">.</span><span class="n">int32</span><span class="p">)</span> |
| <span class="c1"># create a 16-bit float array</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mf">1.2</span><span class="p">,</span> <span class="mf">2.3</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">np</span><span class="o">.</span><span class="n">float16</span><span class="p">)</span> |
| <span class="p">(</span><span class="n">a</span><span class="o">.</span><span class="n">dtype</span><span class="p">,</span> <span class="n">b</span><span class="o">.</span><span class="n">dtype</span><span class="p">,</span> <span class="n">c</span><span class="o">.</span><span class="n">dtype</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <p>If we know the size of the desired NDArray, but not the element values, MXNet |
| offers several functions to create arrays with placeholder content:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="c1"># create a 2-dimensional array full of zeros with shape (2,3)</span> |
| <span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> |
| <span class="c1"># create a same shape array full of ones</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> |
| <span class="c1"># create a same shape array with all elements set to 7</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">full</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">),</span> <span class="mi">7</span><span class="p">)</span> |
| <span class="c1"># create a same shape whose initial content is random and</span> |
| <span class="c1"># depends on the state of the memory</span> |
| <span class="n">d</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">empty</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> |
| </pre></div> |
| </div> |
| </div> |
| <div class="section" id="printing-arrays"> |
| <span id="printing-arrays"></span><h2>Printing Arrays<a class="headerlink" href="#printing-arrays" title="Permalink to this headline">¶</a></h2> |
| <p>When inspecting the contents of an <code class="docutils literal"><span class="pre">NDArray</span></code>, it’s often convenient to first |
extract its contents as a <code class="docutils literal"><span class="pre">numpy.ndarray</span></code> using the <code class="docutils literal"><span class="pre">asnumpy</span></code> function. NumPy
| uses the following layout:</p> |
| <ul class="simple"> |
| <li>The last axis is printed from left to right,</li> |
| <li>The second-to-last is printed from top to bottom,</li> |
| <li>The rest are also printed from top to bottom, with each slice separated from |
| the next by an empty line.</li> |
| </ul> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">arange</span><span class="p">(</span><span class="mi">18</span><span class="p">)</span><span class="o">.</span><span class="n">reshape</span><span class="p">((</span><span class="mi">3</span><span class="p">,</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> |
| <span class="n">b</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| </div> |
| <div class="section" id="basic-operations"> |
| <span id="basic-operations"></span><h2>Basic Operations<a class="headerlink" href="#basic-operations" title="Permalink to this headline">¶</a></h2> |
<p>When applied to NDArrays, the standard arithmetic operators perform <em>elementwise</em>
calculations. Each operation returns a new array that holds the result.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> |
| <span class="c1"># elementwise plus</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">a</span> <span class="o">+</span> <span class="n">b</span> |
| <span class="c1"># elementwise minus</span> |
| <span class="n">d</span> <span class="o">=</span> <span class="o">-</span> <span class="n">c</span> |
| <span class="c1"># elementwise pow and sin, and then transpose</span> |
| <span class="n">e</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">sin</span><span class="p">(</span><span class="n">c</span><span class="o">**</span><span class="mi">2</span><span class="p">)</span><span class="o">.</span><span class="n">T</span> |
| <span class="c1"># elementwise max</span> |
| <span class="n">f</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">maximum</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">c</span><span class="p">)</span> |
| <span class="n">f</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| <p>As in <code class="docutils literal"><span class="pre">NumPy</span></code>, <code class="docutils literal"><span class="pre">*</span></code> represents element-wise multiplication. For matrix-matrix |
| multiplication, use <code class="docutils literal"><span class="pre">dot</span></code>.</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">arange</span><span class="p">(</span><span class="mi">4</span><span class="p">)</span><span class="o">.</span><span class="n">reshape</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">2</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">a</span> <span class="o">*</span> <span class="n">a</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">a</span><span class="p">,</span><span class="n">a</span><span class="p">)</span> |
| <span class="k">print</span><span class="p">(</span><span class="s2">"b: </span><span class="si">%s</span><span class="s2">, </span><span class="se">\n</span><span class="s2"> c: </span><span class="si">%s</span><span class="s2">"</span> <span class="o">%</span> <span class="p">(</span><span class="n">b</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">(),</span> <span class="n">c</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()))</span> |
| </pre></div> |
| </div> |
<p>Assignment operators such as <code class="docutils literal"><span class="pre">+=</span></code> and <code class="docutils literal"><span class="pre">*=</span></code> modify arrays in place and thus
don’t allocate new memory for the result.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">2</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">(</span><span class="n">a</span><span class="o">.</span><span class="n">shape</span><span class="p">)</span> |
| <span class="n">b</span> <span class="o">+=</span> <span class="n">a</span> |
| <span class="n">b</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| </div> |
| <div class="section" id="indexing-and-slicing"> |
| <span id="indexing-and-slicing"></span><h2>Indexing and Slicing<a class="headerlink" href="#indexing-and-slicing" title="Permalink to this headline">¶</a></h2> |
<p>The slice operator <code class="docutils literal"><span class="pre">[]</span></code> operates on axis 0.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">arange</span><span class="p">(</span><span class="mi">6</span><span class="p">)</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span><span class="mi">2</span><span class="p">))</span> |
| <span class="n">a</span><span class="p">[</span><span class="mi">1</span><span class="p">:</span><span class="mi">2</span><span class="p">]</span> <span class="o">=</span> <span class="mi">1</span> |
| <span class="n">a</span><span class="p">[:]</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
<p>We can also slice a particular axis with the <code class="docutils literal"><span class="pre">slice_axis</span></code> method:</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">d</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">slice_axis</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">begin</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">end</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span> |
| <span class="n">d</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| </div> |
| <div class="section" id="shape-manipulation"> |
| <span id="shape-manipulation"></span><h2>Shape Manipulation<a class="headerlink" href="#shape-manipulation" title="Permalink to this headline">¶</a></h2> |
<p>Using <code class="docutils literal"><span class="pre">reshape</span></code>, we can manipulate any array’s shape as long as its size remains
unchanged.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">arange</span><span class="p">(</span><span class="mi">24</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">a</span><span class="o">.</span><span class="n">reshape</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">,</span><span class="mi">4</span><span class="p">))</span> |
| <span class="n">b</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
<p>The <code class="docutils literal"><span class="pre">concat</span></code> method concatenates multiple arrays along a given axis
(its <code class="docutils literal"><span class="pre">dim</span></code> argument, which defaults to 1, i.e. the second axis). Their
shapes must be the same along all other axes.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span><span class="o">*</span><span class="mi">2</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">concat</span><span class="p">(</span><span class="n">a</span><span class="p">,</span><span class="n">b</span><span class="p">)</span> |
| <span class="n">c</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| </div> |
| <div class="section" id="reduce"> |
| <span id="reduce"></span><h2>Reduce<a class="headerlink" href="#reduce" title="Permalink to this headline">¶</a></h2> |
<p>Some functions, like <code class="docutils literal"><span class="pre">sum</span></code> and <code class="docutils literal"><span class="pre">mean</span></code>, reduce arrays to scalars.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">sum</span><span class="p">(</span><span class="n">a</span><span class="p">)</span> |
| <span class="n">b</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| <p>We can also reduce an array along a particular axis:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">c</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">sum_axis</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span> |
| <span class="n">c</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| </div> |
| <div class="section" id="broadcast"> |
| <span id="broadcast"></span><h2>Broadcast<a class="headerlink" href="#broadcast" title="Permalink to this headline">¶</a></h2> |
<p>We can also broadcast an array. Broadcasting operations duplicate an array’s
values along axes of length 1. The following code broadcasts along axis 1:</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">arange</span><span class="p">(</span><span class="mi">6</span><span class="p">)</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="mi">6</span><span class="p">,</span><span class="mi">1</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">a</span><span class="o">.</span><span class="n">broadcast_to</span><span class="p">((</span><span class="mi">6</span><span class="p">,</span><span class="mi">4</span><span class="p">))</span> <span class="c1">#</span> |
| <span class="n">b</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| <p>It’s possible to simultaneously broadcast along multiple axes. In the following example, we broadcast along axes 1 and 2:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">c</span> <span class="o">=</span> <span class="n">a</span><span class="o">.</span><span class="n">reshape</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> |
| <span class="n">d</span> <span class="o">=</span> <span class="n">c</span><span class="o">.</span><span class="n">broadcast_to</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">2</span><span class="p">,</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> |
| <span class="n">d</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
<p>Broadcasting is also applied automatically by some operations,
e.g. <code class="docutils literal"><span class="pre">*</span></code> and <code class="docutils literal"><span class="pre">+</span></code>, when the operands have different shapes.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">3</span><span class="p">,</span><span class="mi">2</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">1</span><span class="p">,</span><span class="mi">2</span><span class="p">))</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">a</span> <span class="o">+</span> <span class="n">b</span> |
| <span class="n">c</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| </div> |
| <div class="section" id="copies"> |
| <span id="copies"></span><h2>Copies<a class="headerlink" href="#copies" title="Permalink to this headline">¶</a></h2> |
| <p>When assigning an NDArray to another Python variable, we copy a reference to the |
| <em>same</em> NDArray. However, we often need to make a copy of the data, so that we |
| can manipulate the new array without overwriting the original values.</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">2</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">a</span> |
| <span class="n">b</span> <span class="ow">is</span> <span class="n">a</span> <span class="c1"># will be True</span> |
| </pre></div> |
| </div> |
| <p>The <code class="docutils literal"><span class="pre">copy</span></code> method makes a deep copy of the array and its data:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">b</span> <span class="o">=</span> <span class="n">a</span><span class="o">.</span><span class="n">copy</span><span class="p">()</span> |
| <span class="n">b</span> <span class="ow">is</span> <span class="n">a</span> <span class="c1"># will be False</span> |
| </pre></div> |
| </div> |
| <p>The above code allocates a new NDArray and then assigns to <em>b</em>. When we do not |
| want to allocate additional memory, we can use the <code class="docutils literal"><span class="pre">copyto</span></code> method or the slice |
| operator <code class="docutils literal"><span class="pre">[]</span></code> instead.</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">(</span><span class="n">a</span><span class="o">.</span><span class="n">shape</span><span class="p">)</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">b</span> |
| <span class="n">c</span><span class="p">[:]</span> <span class="o">=</span> <span class="n">a</span> |
| <span class="n">d</span> <span class="o">=</span> <span class="n">b</span> |
| <span class="n">a</span><span class="o">.</span><span class="n">copyto</span><span class="p">(</span><span class="n">d</span><span class="p">)</span> |
| <span class="p">(</span><span class="n">c</span> <span class="ow">is</span> <span class="n">b</span><span class="p">,</span> <span class="n">d</span> <span class="ow">is</span> <span class="n">b</span><span class="p">)</span> <span class="c1"># Both will be True</span> |
| </pre></div> |
| </div> |
| </div> |
| <div class="section" id="advanced-topics"> |
| <span id="advanced-topics"></span><h2>Advanced Topics<a class="headerlink" href="#advanced-topics" title="Permalink to this headline">¶</a></h2> |
| <p>MXNet’s NDArray offers some advanced features that differentiate it from the |
| offerings you’ll find in most other libraries.</p> |
| <div class="section" id="gpu-support"> |
| <span id="gpu-support"></span><h3>GPU Support<a class="headerlink" href="#gpu-support" title="Permalink to this headline">¶</a></h3> |
| <p>By default, NDArray operators are executed on CPU. But with MXNet, it’s easy to |
| switch to another computation resource, such as GPU, when available. Each |
| NDArray’s device information is stored in <code class="docutils literal"><span class="pre">ndarray.context</span></code>. When MXNet is |
| compiled with flag <code class="docutils literal"><span class="pre">USE_CUDA=1</span></code> and the machine has at least one NVIDIA GPU, we |
| can cause all computations to run on GPU 0 by using context <code class="docutils literal"><span class="pre">mx.gpu(0)</span></code>, or |
| simply <code class="docutils literal"><span class="pre">mx.gpu()</span></code>. When we have access to two or more GPUs, the 2nd GPU is |
| represented by <code class="docutils literal"><span class="pre">mx.gpu(1)</span></code>, etc.</p> |
<p><strong>Note</strong>: To execute the following section on a CPU, set <code class="docutils literal"><span class="pre">gpu_device</span></code> to <code class="docutils literal"><span class="pre">mx.cpu()</span></code>.</p>
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">gpu_device</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">gpu</span><span class="p">()</span> <span class="c1"># Change this to mx.cpu() in absence of GPUs.</span> |
| |
| |
| <span class="k">def</span> <span class="nf">f</span><span class="p">():</span> |
| <span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">100</span><span class="p">,</span><span class="mi">100</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">100</span><span class="p">,</span><span class="mi">100</span><span class="p">))</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">a</span> <span class="o">+</span> <span class="n">b</span> |
| <span class="k">print</span><span class="p">(</span><span class="n">c</span><span class="p">)</span> |
| <span class="c1"># in default mx.cpu() is used</span> |
| <span class="n">f</span><span class="p">()</span> |
| <span class="c1"># change the default context to the first GPU</span> |
| <span class="k">with</span> <span class="n">mx</span><span class="o">.</span><span class="n">Context</span><span class="p">(</span><span class="n">gpu_device</span><span class="p">):</span> |
| <span class="n">f</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| <p>We can also explicitly specify the context when creating an array:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">100</span><span class="p">,</span> <span class="mi">100</span><span class="p">),</span> <span class="n">gpu_device</span><span class="p">)</span> |
| <span class="n">a</span> |
| </pre></div> |
| </div> |
| <p>Currently, MXNet requires two arrays to sit on the same device for |
| computation. There are several methods for copying data between devices.</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">100</span><span class="p">,</span><span class="mi">100</span><span class="p">),</span> <span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">())</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">100</span><span class="p">,</span><span class="mi">100</span><span class="p">),</span> <span class="n">gpu_device</span><span class="p">)</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">100</span><span class="p">,</span><span class="mi">100</span><span class="p">),</span> <span class="n">gpu_device</span><span class="p">)</span> |
| <span class="n">a</span><span class="o">.</span><span class="n">copyto</span><span class="p">(</span><span class="n">c</span><span class="p">)</span> <span class="c1"># copy from CPU to GPU</span> |
| <span class="n">d</span> <span class="o">=</span> <span class="n">b</span> <span class="o">+</span> <span class="n">c</span> |
| <span class="n">e</span> <span class="o">=</span> <span class="n">b</span><span class="o">.</span><span class="n">as_in_context</span><span class="p">(</span><span class="n">c</span><span class="o">.</span><span class="n">context</span><span class="p">)</span> <span class="o">+</span> <span class="n">c</span> <span class="c1"># same to above</span> |
| <span class="p">{</span><span class="s1">'d'</span><span class="p">:</span><span class="n">d</span><span class="p">,</span> <span class="s1">'e'</span><span class="p">:</span><span class="n">e</span><span class="p">}</span> |
| </pre></div> |
| </div> |
| </div> |
| <div class="section" id="serialize-from-to-distributed-filesystems"> |
| <span id="serialize-from-to-distributed-filesystems"></span><h3>Serialize From/To (Distributed) Filesystems<a class="headerlink" href="#serialize-from-to-distributed-filesystems" title="Permalink to this headline">¶</a></h3> |
| <p>MXNet offers two simple ways to save (load) data to (from) disk. The first way |
| is to use <code class="docutils literal"><span class="pre">pickle</span></code>, as you might with any other Python objects. <code class="docutils literal"><span class="pre">NDArray</span></code> is |
| pickle-compatible.</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">pickle</span> <span class="kn">as</span> <span class="nn">pkl</span> |
| <span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">))</span> |
| <span class="c1"># pack and then dump into disk</span> |
| <span class="n">data</span> <span class="o">=</span> <span class="n">pkl</span><span class="o">.</span><span class="n">dumps</span><span class="p">(</span><span class="n">a</span><span class="p">)</span> |
| <span class="n">pkl</span><span class="o">.</span><span class="n">dump</span><span class="p">(</span><span class="n">data</span><span class="p">,</span> <span class="nb">open</span><span class="p">(</span><span class="s1">'tmp.pickle'</span><span class="p">,</span> <span class="s1">'wb'</span><span class="p">))</span> |
| <span class="c1"># load from disk and then unpack</span> |
| <span class="n">data</span> <span class="o">=</span> <span class="n">pkl</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="nb">open</span><span class="p">(</span><span class="s1">'tmp.pickle'</span><span class="p">,</span> <span class="s1">'rb'</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">pkl</span><span class="o">.</span><span class="n">loads</span><span class="p">(</span><span class="n">data</span><span class="p">)</span> |
| <span class="n">b</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| <p>The second way is to directly dump to disk in binary format by using the <code class="docutils literal"><span class="pre">save</span></code> |
| and <code class="docutils literal"><span class="pre">load</span></code> methods. We can save/load a single NDArray, or a list of NDArrays:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">5</span><span class="p">,</span><span class="mi">6</span><span class="p">))</span> |
| <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="s2">"temp.ndarray"</span><span class="p">,</span> <span class="p">[</span><span class="n">a</span><span class="p">,</span><span class="n">b</span><span class="p">])</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="s2">"temp.ndarray"</span><span class="p">)</span> |
| <span class="n">c</span> |
| </pre></div> |
| </div> |
| <p>It’s also possible to save or load a dict of NDArrays in this fashion:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">d</span> <span class="o">=</span> <span class="p">{</span><span class="s1">'a'</span><span class="p">:</span><span class="n">a</span><span class="p">,</span> <span class="s1">'b'</span><span class="p">:</span><span class="n">b</span><span class="p">}</span> |
| <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="s2">"temp.ndarray"</span><span class="p">,</span> <span class="n">d</span><span class="p">)</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="s2">"temp.ndarray"</span><span class="p">)</span> |
| <span class="n">c</span> |
| </pre></div> |
| </div> |
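| <p>When a dict is saved, <code class="docutils literal"><span class="pre">load</span></code> returns a Python dict keyed by the original names (saving a list returns a list). A minimal sketch of reading one entry back, continuing from the example above:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span>sorted(c.keys())  # ['a', 'b'] |
| c['a'].asnumpy()  # the array that was saved under the key 'a' |
| </pre></div> |
| </div> |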
| <p>The <code class="docutils literal"><span class="pre">load</span></code> and <code class="docutils literal"><span class="pre">save</span></code> methods are preferable to pickle in two respects</p> |
| <ol class="simple"> |
| <li>When using these methods, you can save data from within the Python interface |
| and then use it later from another language’s binding. For example, if we save |
| the data in Python:</li> |
| </ol> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">))</span> |
| <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="s2">"temp.ndarray"</span><span class="p">,</span> <span class="p">[</span><span class="n">a</span><span class="p">,])</span> |
| </pre></div> |
| </div> |
| <p>we can later load it from R:</p> |
| <div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o"><-</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="s2">"temp.ndarray"</span><span class="p">)</span> |
| <span class="k">as</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">a</span><span class="p">[[</span><span class="mi">1</span><span class="p">]])</span> |
| <span class="c1">## [,1] [,2] [,3]</span> |
| <span class="c1">## [1,] 1 1 1</span> |
| <span class="c1">## [2,] 1 1 1</span> |
| </pre></div> |
| </div> |
| <ol class="simple"> |
| <li>When a distributed filesystem such as Amazon S3 or Hadoop HDFS is set up, we |
| can directly save to and load from it.</li> |
| </ol> |
| <div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="s1">'s3://mybucket/mydata.ndarray'</span><span class="p">,</span> <span class="p">[</span><span class="n">a</span><span class="p">,])</span> <span class="c1"># if compiled with USE_S3=1</span> |
| <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="s1">'hdfs///users/myname/mydata.bin'</span><span class="p">,</span> <span class="p">[</span><span class="n">a</span><span class="p">,])</span> <span class="c1"># if compiled with USE_HDFS=1</span> |
| </pre></div> |
| </div> |
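| <p>Loading works the same way; the sketch below assumes the same hypothetical bucket and the <code class="docutils literal"><span class="pre">USE_S3=1</span></code> build flag from the example above:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span>b = mx.nd.load('s3://mybucket/mydata.ndarray')  # returns the saved list of NDArrays |
| </pre></div> |
| </div> |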
| </div> |
| <div class="section" id="lazy-evaluation-and-automatic-parallelization"> |
| <span id="lazy-evaluation-and-automatic-parallelization"></span><h3>Lazy Evaluation and Automatic Parallelization<a class="headerlink" href="#lazy-evaluation-and-automatic-parallelization" title="Permalink to this headline">¶</a></h3> |
| <p>MXNet uses lazy evaluation to improve performance. When we run <code class="docutils literal"><span class="pre">a=b+1</span></code> |
| in Python, the Python thread simply pushes the operation into the backend engine |
| and then returns immediately. There are two benefits to this approach:</p> |
| <ol class="simple"> |
| <li>The main Python thread can continue to execute other computations once the |
| previous one has been pushed. This is especially useful for frontend languages with |
| heavy interpreter overhead.</li> |
| <li>The backend engine can explore further optimizations, such as |
| automatic parallelization.</li> |
| </ol> |
| <p>The backend engine resolves data dependencies and schedules the computations |
| correctly; this is transparent to frontend users. We can explicitly call the |
| <code class="docutils literal"><span class="pre">wait_to_read</span></code> method on the result array to wait until the computation |
| finishes. Operations that copy data from an array into another package, such as |
| <code class="docutils literal"><span class="pre">asnumpy</span></code>, implicitly call <code class="docutils literal"><span class="pre">wait_to_read</span></code>.</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">time</span> |
| <span class="k">def</span> <span class="nf">do</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">n</span><span class="p">):</span> |
| <span class="sd">"""push computation into the backend engine"""</span> |
| <span class="k">return</span> <span class="p">[</span><span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">x</span><span class="p">,</span><span class="n">x</span><span class="p">)</span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">n</span><span class="p">)]</span> |
| <span class="k">def</span> <span class="nf">wait</span><span class="p">(</span><span class="n">x</span><span class="p">):</span> |
| <span class="sd">"""wait until all results are available"""</span> |
| <span class="k">for</span> <span class="n">y</span> <span class="ow">in</span> <span class="n">x</span><span class="p">:</span> |
| <span class="n">y</span><span class="o">.</span><span class="n">wait_to_read</span><span class="p">()</span> |
| |
| <span class="n">tic</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span> |
| <span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">1000</span><span class="p">,</span><span class="mi">1000</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">do</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="mi">50</span><span class="p">)</span> |
| <span class="k">print</span><span class="p">(</span><span class="s1">'time for all computations are pushed into the backend engine:</span><span class="se">\n</span><span class="s1"> </span><span class="si">%f</span><span class="s1"> sec'</span> <span class="o">%</span> <span class="p">(</span><span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span> <span class="o">-</span> <span class="n">tic</span><span class="p">))</span> |
| <span class="n">wait</span><span class="p">(</span><span class="n">b</span><span class="p">)</span> |
| <span class="k">print</span><span class="p">(</span><span class="s1">'time for all computations are finished:</span><span class="se">\n</span><span class="s1"> </span><span class="si">%f</span><span class="s1"> sec'</span> <span class="o">%</span> <span class="p">(</span><span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span> <span class="o">-</span> <span class="n">tic</span><span class="p">))</span> |
| </pre></div> |
| </div> |
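| <p>The implicit synchronization mentioned above can also be observed directly: copying a result to NumPy with <code class="docutils literal"><span class="pre">asnumpy</span></code> blocks until that result is ready. A minimal sketch, reusing <code class="docutils literal"><span class="pre">a</span></code> and the <code class="docutils literal"><span class="pre">time</span></code> import from the previous example:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span>tic = time.time() |
| c = mx.nd.dot(a, a)  # returns almost immediately; the work is only queued |
| c.asnumpy()          # copying to NumPy waits for the queued computation to finish |
| print('time until asnumpy returns: %f sec' % (time.time() - tic)) |
| </pre></div> |
| </div> |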
| <p>Besides analyzing data read and write dependencies, the backend engine is able |
| to schedule computations that have no dependencies on each other in parallel. For example, in the |
| following code:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">a</span> <span class="o">+</span> <span class="mi">1</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">a</span> <span class="o">+</span> <span class="mi">2</span> |
| <span class="n">d</span> <span class="o">=</span> <span class="n">b</span> <span class="o">*</span> <span class="n">c</span> |
| </pre></div> |
| </div> |
| <p>the second and third lines can be executed in parallel. The following example |
| runs the CPU workload first and then the GPU workload, one after the other:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">n</span> <span class="o">=</span> <span class="mi">10</span> |
| <span class="n">a</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">1000</span><span class="p">,</span><span class="mi">1000</span><span class="p">))</span> |
| <span class="n">b</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="mi">6000</span><span class="p">,</span><span class="mi">6000</span><span class="p">),</span> <span class="n">gpu_device</span><span class="p">)</span> |
| <span class="n">tic</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">do</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> |
| <span class="n">wait</span><span class="p">(</span><span class="n">c</span><span class="p">)</span> |
| <span class="k">print</span><span class="p">(</span><span class="s1">'Time to finish the CPU workload: </span><span class="si">%f</span><span class="s1"> sec'</span> <span class="o">%</span> <span class="p">(</span><span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span> <span class="o">-</span> <span class="n">tic</span><span class="p">))</span> |
| <span class="n">d</span> <span class="o">=</span> <span class="n">do</span><span class="p">(</span><span class="n">b</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> |
| <span class="n">wait</span><span class="p">(</span><span class="n">d</span><span class="p">)</span> |
| <span class="k">print</span><span class="p">(</span><span class="s1">'Time to finish both CPU/GPU workloads: </span><span class="si">%f</span><span class="s1"> sec'</span> <span class="o">%</span> <span class="p">(</span><span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span> <span class="o">-</span> <span class="n">tic</span><span class="p">))</span> |
| </pre></div> |
| </div> |
| <p>Now we issue all workloads at the same time. The backend engine will try to |
| run the CPU and GPU computations in parallel.</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span><span class="n">tic</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span> |
| <span class="n">c</span> <span class="o">=</span> <span class="n">do</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> |
| <span class="n">d</span> <span class="o">=</span> <span class="n">do</span><span class="p">(</span><span class="n">b</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> |
| <span class="n">wait</span><span class="p">(</span><span class="n">c</span><span class="p">)</span> |
| <span class="n">wait</span><span class="p">(</span><span class="n">d</span><span class="p">)</span> |
| <span class="k">print</span><span class="p">(</span><span class="s1">'Both as finished in: </span><span class="si">%f</span><span class="s1"> sec'</span> <span class="o">%</span> <span class="p">(</span><span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span> <span class="o">-</span> <span class="n">tic</span><span class="p">))</span> |
| </pre></div> |
| </div> |
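| <p>When many arrays are pending, the per-array <code class="docutils literal"><span class="pre">wait</span></code> helper can be replaced with <code class="docutils literal"><span class="pre">mx.nd.waitall()</span></code>, which blocks until every computation pushed so far has finished. A minimal sketch of the same experiment:</p> |
| <div class="highlight-python"><div class="highlight"><pre><span></span>tic = time.time() |
| c = do(a, n) |
| d = do(b, n) |
| mx.nd.waitall()  # block until all queued computations have completed |
| print('Both are finished in: %f sec' % (time.time() - tic)) |
| </pre></div> |
| </div> |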
| <div class="btn-group" role="group"> |
| <div class="download-btn"><a download="ndarray.ipynb" href="ndarray.ipynb"><span class="glyphicon glyphicon-download-alt"></span> ndarray.ipynb</a></div></div></div> |
| </div> |
| </div> |
| </div> |
| </div> |
| <div aria-label="main navigation" class="sphinxsidebar rightsidebar" role="navigation"> |
| <div class="sphinxsidebarwrapper"> |
| <h3><a href="../../index.html">Table Of Contents</a></h3> |
| <ul> |
| <li><a class="reference internal" href="#">NDArray - Imperative tensor operations on CPU/GPU</a><ul> |
| <li><a class="reference internal" href="#prerequisites">Prerequisites</a></li> |
| <li><a class="reference internal" href="#array-creation">Array Creation</a></li> |
| <li><a class="reference internal" href="#printing-arrays">Printing Arrays</a></li> |
| <li><a class="reference internal" href="#basic-operations">Basic Operations</a></li> |
| <li><a class="reference internal" href="#indexing-and-slicing">Indexing and Slicing</a></li> |
| <li><a class="reference internal" href="#shape-manipulation">Shape Manipulation</a></li> |
| <li><a class="reference internal" href="#reduce">Reduce</a></li> |
| <li><a class="reference internal" href="#broadcast">Broadcast</a></li> |
| <li><a class="reference internal" href="#copies">Copies</a></li> |
| <li><a class="reference internal" href="#advanced-topics">Advanced Topics</a><ul> |
| <li><a class="reference internal" href="#gpu-support">GPU Support</a></li> |
| <li><a class="reference internal" href="#serialize-from-to-distributed-filesystems">Serialize From/To (Distributed) Filesystems</a></li> |
| <li><a class="reference internal" href="#lazy-evaluation-and-automatic-parallelization">Lazy Evaluation and Automatic Parallelization</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| </ul> |
| </div> |
| </div> |
| </div><div class="footer"> |
| <div class="section-disclaimer"> |
| <div class="container"> |
| <div> |
| <img height="60" src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/apache_incubator_logo.png"/> |
| <p> |
| Apache MXNet is an effort undergoing incubation at The Apache Software Foundation (ASF), <strong>sponsored by the <i>Apache Incubator</i></strong>. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. |
| </p> |
| <p> |
| "Copyright © 2017-2018, The Apache Software Foundation |
| Apache MXNet, MXNet, Apache, the Apache feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the Apache Software Foundation." |
| </p> |
| </div> |
| </div> |
| </div> |
| </div> <!-- pagename != index --> |
| </div> |
| <script crossorigin="anonymous" integrity="sha384-0mSbJDEHialfmuBBQP6A4Qrprq5OVfW37PRR3j5ELqxss1yVqOtnepnHVP9aJ7xS" src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script> |
| <script src="../../_static/js/sidebar.js" type="text/javascript"></script> |
| <script src="../../_static/js/search.js" type="text/javascript"></script> |
| <script src="../../_static/js/navbar.js" type="text/javascript"></script> |
| <script src="../../_static/js/clipboard.min.js" type="text/javascript"></script> |
| <script src="../../_static/js/copycode.js" type="text/javascript"></script> |
| <script src="../../_static/js/page.js" type="text/javascript"></script> |
| <script src="../../_static/js/docversion.js" type="text/javascript"></script> |
| <script type="text/javascript"> |
| $('body').ready(function () { |
| $('body').css('visibility', 'visible'); |
| }); |
| </script> |
| </body> |
| </html> |