<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<style>
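/* Pure-CSS hover dropdowns for the site header: .dropdown is positioned
   relative so the absolutely positioned .dropdown-content menu anchors to
   it; the menu is hidden by default and only revealed by the
   .dropdown:hover rule below, so no JavaScript is involved. */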
.dropdown {
position: relative;
display: inline-block;
}
.dropdown-content {
display: none;
position: absolute;
background-color: #f9f9f9;
min-width: 160px;
box-shadow: 0px 8px 16px 0px rgba(0,0,0,0.2);
padding: 12px 16px;
z-index: 1;
text-align: left;
}
.dropdown:hover .dropdown-content {
display: block;
}
.dropdown-option:hover {
color: #FF4500;
}
.dropdown-option-active {
color: #FF4500;
font-weight: lighter;
}
.dropdown-option {
color: #000000;
font-weight: lighter;
}
.dropdown-header {
color: #FFFFFF;
display: inline-flex;
}
.dropdown-caret {
width: 18px;
height: 54px;
}
.dropdown-caret-path {
fill: #FFFFFF;
}
</style>
<title>gluon.loss &#8212; Apache MXNet documentation</title>
<link rel="stylesheet" href="../../../_static/basic.css" type="text/css" />
<link rel="stylesheet" href="../../../_static/pygments.css" type="text/css" />
<link rel="stylesheet" type="text/css" href="../../../_static/mxnet.css" />
<link rel="stylesheet" href="../../../_static/material-design-lite-1.3.0/material.blue-deep_orange.min.css" type="text/css" />
<link rel="stylesheet" href="../../../_static/sphinx_materialdesign_theme.css" type="text/css" />
<link rel="stylesheet" href="../../../_static/fontawesome/all.css" type="text/css" />
<link rel="stylesheet" href="../../../_static/fonts.css" type="text/css" />
<link rel="stylesheet" href="../../../_static/feedback.css" type="text/css" />
<script id="documentation_options" data-url_root="../../../" src="../../../_static/documentation_options.js"></script>
<script src="../../../_static/jquery.js"></script>
<script src="../../../_static/underscore.js"></script>
<script src="../../../_static/doctools.js"></script>
<script src="../../../_static/language_data.js"></script>
<script src="../../../_static/matomo_analytics.js"></script>
<script src="../../../_static/autodoc.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/latest.js?config=TeX-AMS-MML_HTMLorMML"></script>
<script type="text/x-mathjax-config">MathJax.Hub.Config({"tex2jax": {"inlineMath": [["$", "$"], ["\\(", "\\)"]], "processEscapes": true, "ignoreClass": "document", "processClass": "math|output_area"}})</script>
<script src="../../../_static/sphinx_materialdesign_theme.js"></script>
<link rel="shortcut icon" href="../../../_static/mxnet-icon.png"/>
<link rel="index" title="Index" href="../../../genindex.html" />
<link rel="search" title="Search" href="../../../search.html" />
<link rel="next" title="gluon.metric" href="../metric/index.html" />
<link rel="prev" title="vision.transforms" href="../data/vision/transforms/index.html" />
</head>
<body><header class="site-header" role="banner">
<div class="wrapper">
<a class="site-title" rel="author" href="/"><img
src="../../../_static/mxnet_logo.png" class="site-header-logo"></a>
<nav class="site-nav">
<input type="checkbox" id="nav-trigger" class="nav-trigger"/>
<label for="nav-trigger">
<span class="menu-icon">
<svg viewBox="0 0 18 15" width="18px" height="15px">
<path d="M18,1.484c0,0.82-0.665,1.484-1.484,1.484H1.484C0.665,2.969,0,2.304,0,1.484l0,0C0,0.665,0.665,0,1.484,0 h15.032C17.335,0,18,0.665,18,1.484L18,1.484z M18,7.516C18,8.335,17.335,9,16.516,9H1.484C0.665,9,0,8.335,0,7.516l0,0 c0-0.82,0.665-1.484,1.484-1.484h15.032C17.335,6.031,18,6.696,18,7.516L18,7.516z M18,13.516C18,14.335,17.335,15,16.516,15H1.484 C0.665,15,0,14.335,0,13.516l0,0c0-0.82,0.665-1.483,1.484-1.483h15.032C17.335,12.031,18,12.695,18,13.516L18,13.516z"/>
</svg>
</span>
</label>
<div class="trigger">
<a class="page-link" href="/get_started">Get Started</a>
<a class="page-link" href="/features">Features</a>
<a class="page-link" href="/ecosystem">Ecosystem</a>
<a class="page-link page-current" href="/api">Docs & Tutorials</a>
<a class="page-link" href="/trusted_by">Trusted By</a>
<a class="page-link" href="https://github.com/apache/mxnet">GitHub</a>
<div class="dropdown" style="min-width:100px">
<span class="dropdown-header">Apache
<svg class="dropdown-caret" viewBox="0 0 32 32" class="icon icon-caret-bottom" aria-hidden="true"><path class="dropdown-caret-path" d="M24 11.305l-7.997 11.39L8 11.305z"></path></svg>
</span>
<div class="dropdown-content" style="min-width:250px">
<a href="https://www.apache.org/foundation/">Apache Software Foundation</a>
<a href="https://incubator.apache.org/">Apache Incubator</a>
<a href="https://www.apache.org/licenses/">License</a>
<a href="/versions/1.9.1/api/faq/security.html">Security</a>
<a href="https://privacy.apache.org/policies/privacy-policy-public.html">Privacy</a>
<a href="https://www.apache.org/events/current-event">Events</a>
<a href="https://www.apache.org/foundation/sponsorship.html">Sponsorship</a>
<a href="https://www.apache.org/foundation/thanks.html">Thanks</a>
</div>
</div>
<div class="dropdown">
<span class="dropdown-header">master
<svg class="dropdown-caret" viewBox="0 0 32 32" class="icon icon-caret-bottom" aria-hidden="true"><path class="dropdown-caret-path" d="M24 11.305l-7.997 11.39L8 11.305z"></path></svg>
</span>
<div class="dropdown-content">
<a class="dropdown-option-active" href="/versions/master/">master</a><br>
<a class="dropdown-option" href="/versions/1.9.1/">1.9.1</a><br>
<a class="dropdown-option" href="/versions/1.8.0/">1.8.0</a><br>
<a class="dropdown-option" href="/versions/1.7.0/">1.7.0</a><br>
<a class="dropdown-option" href="/versions/1.6.0/">1.6.0</a><br>
<a class="dropdown-option" href="/versions/1.5.0/">1.5.0</a><br>
<a class="dropdown-option" href="/versions/1.4.1/">1.4.1</a><br>
<a class="dropdown-option" href="/versions/1.3.1/">1.3.1</a><br>
<a class="dropdown-option" href="/versions/1.2.1/">1.2.1</a><br>
<a class="dropdown-option" href="/versions/1.1.0/">1.1.0</a><br>
<a class="dropdown-option" href="/versions/1.0.0/">1.0.0</a><br>
<a class="dropdown-option" href="/versions/0.12.1/">0.12.1</a><br>
<a class="dropdown-option" href="/versions/0.11.0/">0.11.0</a>
</div>
</div>
</div>
</nav>
</div>
</header>
<div class="mdl-layout mdl-js-layout mdl-layout--fixed-header mdl-layout--fixed-drawer"><header class="mdl-layout__header mdl-layout__header--waterfall ">
<div class="mdl-layout__header-row">
<nav class="mdl-navigation breadcrumb">
<a class="mdl-navigation__link" href="../../index.html">Python API</a><i class="material-icons">navigate_next</i>
<a class="mdl-navigation__link" href="../index.html">mxnet.gluon</a><i class="material-icons">navigate_next</i>
<a class="mdl-navigation__link is-active">gluon.loss</a>
</nav>
<div class="mdl-layout-spacer"></div>
<nav class="mdl-navigation">
<form class="form-inline pull-sm-right" action="../../../search.html" method="get">
<div class="mdl-textfield mdl-js-textfield mdl-textfield--expandable mdl-textfield--floating-label mdl-textfield--align-right">
<label id="quick-search-icon" class="mdl-button mdl-js-button mdl-button--icon" for="waterfall-exp">
<i class="material-icons">search</i>
</label>
<div class="mdl-textfield__expandable-holder">
<input class="mdl-textfield__input" type="text" name="q" id="waterfall-exp" placeholder="Search" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</div>
</div>
<div class="mdl-tooltip" data-mdl-for="quick-search-icon">
Quick search
</div>
</form>
<a id="button-show-github"
href="https://github.com/apache/mxnet/edit/master/docs/python_docs/python/api/gluon/loss/index.rst" class="mdl-button mdl-js-button mdl-button--icon">
<i class="material-icons">edit</i>
</a>
<div class="mdl-tooltip" data-mdl-for="button-show-github">
Edit on GitHub
</div>
</nav>
</div>
<div class="mdl-layout__header-row header-links">
<div class="mdl-layout-spacer"></div>
<nav class="mdl-navigation">
</nav>
</div>
</header><header class="mdl-layout__drawer">
<div class="globaltoc">
<span class="mdl-layout-title toc">Table Of Contents</span>
<nav class="mdl-navigation">
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="../../../tutorials/index.html">Python Tutorials</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/getting-started/index.html">Getting Started</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/index.html">Crash Course</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/0-introduction.html">Introduction</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/1-nparray.html">Step 1: Manipulate data with NP on MXNet</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/2-create-nn.html">Step 2: Create a neural network</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/3-autograd.html">Step 3: Automatic differentiation with autograd</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/4-components.html">Step 4: Necessary components that are not in the network</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/5-datasets.html">Step 5: <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s and <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/5-datasets.html#Using-own-data-with-included-Datasets">Using own data with included <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/5-datasets.html#Using-your-own-data-with-custom-Datasets">Using your own data with custom <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/5-datasets.html#New-in-MXNet-2.0:-faster-C++-backend-dataloaders">New in MXNet 2.0: faster C++ backend dataloaders</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/6-train-nn.html">Step 6: Train a Neural Network</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/7-use-gpus.html">Step 7: Load and Run a NN using GPU</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/index.html">Moving to MXNet from Other Frameworks</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/pytorch.html">PyTorch vs Apache MXNet</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/gluon_from_experiment_to_deployment.html">Gluon: from experiment to deployment</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/gluon_migration_guide.html">Gluon2.0: Migration Guide</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/logistic_regression_explained.html">Logistic regression explained</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/image/mnist.html">MNIST</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/packages/index.html">Packages</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/autograd/index.html">Automatic Differentiation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/gluon/index.html">Gluon</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/index.html">Blocks</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/custom-layer.html">Custom Layers</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/hybridize.html">Hybridize</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/init.html">Initialization</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/naming.html">Parameter and Block Naming</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/nn.html">Layers and Blocks</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/parameters.html">Parameter Management</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/activations/activations.html">Activation Blocks</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/data/index.html">Data Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html">Image Augmentation</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html">Gluon <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s and <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-included-Datasets">Using own data with included <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-custom-Datasets">Using own data with custom <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Appendix:-Upgrading-from-Module-DataIter-to-Gluon-DataLoader">Appendix: Upgrading from Module <code class="docutils literal notranslate"><span class="pre">DataIter</span></code> to Gluon <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/image/index.html">Image Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/info_gan.html">Image similarity search with InfoGAN</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/mnist.html">Handwritten Digit Recognition</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/index.html">Losses</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/custom-loss.html">Custom Loss Blocks</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/kl_divergence.html">Kullback-Leibler (KL) Divergence</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/loss.html">Loss functions</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/text/index.html">Text Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/gnmt.html">Google Neural Machine Translation</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/transformer.html">Machine Translation with Transformer</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/training/index.html">Training</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/fit_api_tutorial.html">MXNet Gluon Fit API</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/trainer.html">Trainer</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/index.html">Learning Rates</a><ul>
<li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_finder.html">Learning Rate Finder</a></li>
<li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules.html">Learning Rate Schedules</a></li>
<li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules_advanced.html">Advanced Learning Rate Schedules</a></li>
</ul>
</li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/normalization/index.html">Normalization Blocks</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/kvstore/index.html">KVStore</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/kvstore/kvstore.html">Distributed Key-Value Store</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/legacy/index.html">Legacy</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/index.html">NDArray</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/01-ndarray-intro.html">An Intro: Manipulate Data the MXNet Way with NDArray</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/02-ndarray-operations.html">NDArray Operations</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/03-ndarray-contexts.html">NDArray Contexts</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/gotchas_numpy_in_mxnet.html">Gotchas using NumPy in Apache MXNet</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/sparse/index.html">Tutorials</a><ul>
<li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/sparse/csr.html">CSRNDArray - NDArray in Compressed Sparse Row Storage Format</a></li>
<li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/sparse/row_sparse.html">RowSparseNDArray - NDArray for Sparse Gradient Updates</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/np/index.html">What is NP on MXNet</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/np/cheat-sheet.html">The NP on MXNet cheat sheet</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/np/np-vs-numpy.html">Differences between NP on MXNet and NumPy</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/onnx/index.html">ONNX</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/fine_tuning_gluon.html">Fine-tuning an ONNX model</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/inference_on_onnx_model.html">Running inference on MXNet/Gluon from an ONNX model</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/deploy/export/onnx.html">Export ONNX Models</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/optimizer/index.html">Optimizers</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/viz/index.html">Visualization</a><ul>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/visualize_graph">Visualize networks</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/performance/index.html">Performance</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/compression/index.html">Compression</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/compression/int8.html">Deploy with int-8</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/float16">Float16</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/gradient_compression">Gradient Compression</a></li>
<li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/int8_inference.html">GluonCV with Quantized Models</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/backend/index.html">Accelerated Backend Tools</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/dnnl/index.html">oneDNN</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/dnnl/dnnl_readme.html">Install MXNet with oneDNN</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/dnnl/dnnl_quantization.html">oneDNN Quantization</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/dnnl/dnnl_quantization_inc.html">Improving accuracy with Intel® Neural Compressor</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/tvm.html">Use TVM</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/profiler.html">Profiling MXNet Models</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/amp.html">Using AMP: Automatic Mixed Precision</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/deploy/index.html">Deployment</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/export/index.html">Export</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/export/onnx.html">Exporting to ONNX format</a></li>
<li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/export_network.html">Export Gluon CV Models</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Save / Load Parameters</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/inference/index.html">Inference</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/cpp.html">Deploy into C++</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/image_classification_jetson.html">Image Classication using pretrained ResNet-50 model on Jetson module</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/index.html">Run on AWS</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_ec2.html">Run on an EC2 Instance</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_sagemaker.html">Run on Amazon SageMaker</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/cloud.html">MXNet on the Cloud</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/extend/index.html">Extend</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/extend/customop.html">Custom Numpy Operators</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/new_op">New Operator Creation</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/add_op_in_backend">New Operator in MXNet Backend</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/using_rtc">Using RTC for CUDA kernels</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 current"><a class="reference internal" href="../../index.html">Python API</a><ul class="current">
<li class="toctree-l2"><a class="reference internal" href="../../np/index.html">mxnet.np</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../np/arrays.html">Array objects</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../np/arrays.ndarray.html">The N-dimensional array (<code class="xref py py-class docutils literal notranslate"><span class="pre">ndarray</span></code>)</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../np/arrays.indexing.html">Indexing</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../np/routines.html">Routines</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../np/routines.array-creation.html">Array creation routines</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.eye.html">mxnet.np.eye</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.empty.html">mxnet.np.empty</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.full.html">mxnet.np.full</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.identity.html">mxnet.np.identity</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ones.html">mxnet.np.ones</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ones_like.html">mxnet.np.ones_like</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.zeros.html">mxnet.np.zeros</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.zeros_like.html">mxnet.np.zeros_like</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.array.html">mxnet.np.array</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.copy.html">mxnet.np.copy</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arange.html">mxnet.np.arange</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linspace.html">mxnet.np.linspace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.logspace.html">mxnet.np.logspace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.meshgrid.html">mxnet.np.meshgrid</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.tril.html">mxnet.np.tril</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../np/routines.array-manipulation.html">Array manipulation routines</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.reshape.html">mxnet.np.reshape</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ravel.html">mxnet.np.ravel</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ndarray.flatten.html">mxnet.np.ndarray.flatten</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.swapaxes.html">mxnet.np.swapaxes</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ndarray.T.html">mxnet.np.ndarray.T</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.transpose.html">mxnet.np.transpose</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.moveaxis.html">mxnet.np.moveaxis</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.rollaxis.html">mxnet.np.rollaxis</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.expand_dims.html">mxnet.np.expand_dims</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.squeeze.html">mxnet.np.squeeze</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.broadcast_to.html">mxnet.np.broadcast_to</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.broadcast_arrays.html">mxnet.np.broadcast_arrays</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.atleast_1d.html">mxnet.np.atleast_1d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.atleast_2d.html">mxnet.np.atleast_2d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.atleast_3d.html">mxnet.np.atleast_3d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.concatenate.html">mxnet.np.concatenate</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.stack.html">mxnet.np.stack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.dstack.html">mxnet.np.dstack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.vstack.html">mxnet.np.vstack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.column_stack.html">mxnet.np.column_stack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.hstack.html">mxnet.np.hstack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.split.html">mxnet.np.split</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.hsplit.html">mxnet.np.hsplit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.vsplit.html">mxnet.np.vsplit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.array_split.html">mxnet.np.array_split</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.dsplit.html">mxnet.np.dsplit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.tile.html">mxnet.np.tile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.repeat.html">mxnet.np.repeat</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.unique.html">mxnet.np.unique</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.delete.html">mxnet.np.delete</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.insert.html">mxnet.np.insert</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.append.html">mxnet.np.append</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.resize.html">mxnet.np.resize</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.trim_zeros.html">mxnet.np.trim_zeros</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.reshape.html">mxnet.np.reshape</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.flip.html">mxnet.np.flip</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.roll.html">mxnet.np.roll</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.rot90.html">mxnet.np.rot90</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.fliplr.html">mxnet.np.fliplr</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.flipud.html">mxnet.np.flipud</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../np/routines.io.html">Input and output</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.genfromtxt.html">mxnet.np.genfromtxt</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ndarray.tolist.html">mxnet.np.ndarray.tolist</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.set_printoptions.html">mxnet.np.set_printoptions</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../np/routines.linalg.html">Linear algebra (<code class="xref py py-mod docutils literal notranslate"><span class="pre">numpy.linalg</span></code>)</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.dot.html">mxnet.np.dot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.vdot.html">mxnet.np.vdot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.inner.html">mxnet.np.inner</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.outer.html">mxnet.np.outer</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.tensordot.html">mxnet.np.tensordot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.einsum.html">mxnet.np.einsum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.multi_dot.html">mxnet.np.linalg.multi_dot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.matmul.html">mxnet.np.matmul</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.matrix_power.html">mxnet.np.linalg.matrix_power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.kron.html">mxnet.np.kron</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.svd.html">mxnet.np.linalg.svd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.cholesky.html">mxnet.np.linalg.cholesky</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.qr.html">mxnet.np.linalg.qr</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.eig.html">mxnet.np.linalg.eig</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.eigh.html">mxnet.np.linalg.eigh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.eigvals.html">mxnet.np.linalg.eigvals</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.eigvalsh.html">mxnet.np.linalg.eigvalsh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.norm.html">mxnet.np.linalg.norm</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.trace.html">mxnet.np.trace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.cond.html">mxnet.np.linalg.cond</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.det.html">mxnet.np.linalg.det</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.matrix_rank.html">mxnet.np.linalg.matrix_rank</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.slogdet.html">mxnet.np.linalg.slogdet</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.solve.html">mxnet.np.linalg.solve</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.tensorsolve.html">mxnet.np.linalg.tensorsolve</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.lstsq.html">mxnet.np.linalg.lstsq</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.inv.html">mxnet.np.linalg.inv</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.pinv.html">mxnet.np.linalg.pinv</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.tensorinv.html">mxnet.np.linalg.tensorinv</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../np/routines.math.html">Mathematical functions</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.sin.html">mxnet.np.sin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.cos.html">mxnet.np.cos</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.tan.html">mxnet.np.tan</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arcsin.html">mxnet.np.arcsin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arccos.html">mxnet.np.arccos</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arctan.html">mxnet.np.arctan</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.degrees.html">mxnet.np.degrees</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.radians.html">mxnet.np.radians</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.hypot.html">mxnet.np.hypot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arctan2.html">mxnet.np.arctan2</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.deg2rad.html">mxnet.np.deg2rad</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.rad2deg.html">mxnet.np.rad2deg</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.unwrap.html">mxnet.np.unwrap</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.sinh.html">mxnet.np.sinh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.cosh.html">mxnet.np.cosh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.tanh.html">mxnet.np.tanh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arcsinh.html">mxnet.np.arcsinh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arccosh.html">mxnet.np.arccosh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arctanh.html">mxnet.np.arctanh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.rint.html">mxnet.np.rint</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.fix.html">mxnet.np.fix</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.floor.html">mxnet.np.floor</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ceil.html">mxnet.np.ceil</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.trunc.html">mxnet.np.trunc</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.around.html">mxnet.np.around</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.round_.html">mxnet.np.round_</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.sum.html">mxnet.np.sum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.prod.html">mxnet.np.prod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.cumsum.html">mxnet.np.cumsum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanprod.html">mxnet.np.nanprod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nansum.html">mxnet.np.nansum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.cumprod.html">mxnet.np.cumprod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nancumprod.html">mxnet.np.nancumprod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nancumsum.html">mxnet.np.nancumsum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.diff.html">mxnet.np.diff</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ediff1d.html">mxnet.np.ediff1d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.cross.html">mxnet.np.cross</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.trapz.html">mxnet.np.trapz</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.exp.html">mxnet.np.exp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.expm1.html">mxnet.np.expm1</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.log.html">mxnet.np.log</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.log10.html">mxnet.np.log10</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.log2.html">mxnet.np.log2</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.log1p.html">mxnet.np.log1p</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.logaddexp.html">mxnet.np.logaddexp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.i0.html">mxnet.np.i0</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ldexp.html">mxnet.np.ldexp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.signbit.html">mxnet.np.signbit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.copysign.html">mxnet.np.copysign</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.frexp.html">mxnet.np.frexp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.spacing.html">mxnet.np.spacing</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.lcm.html">mxnet.np.lcm</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.gcd.html">mxnet.np.gcd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.add.html">mxnet.np.add</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.reciprocal.html">mxnet.np.reciprocal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.negative.html">mxnet.np.negative</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.divide.html">mxnet.np.divide</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.power.html">mxnet.np.power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.subtract.html">mxnet.np.subtract</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.mod.html">mxnet.np.mod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.multiply.html">mxnet.np.multiply</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.true_divide.html">mxnet.np.true_divide</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.remainder.html">mxnet.np.remainder</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.positive.html">mxnet.np.positive</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.float_power.html">mxnet.np.float_power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.fmod.html">mxnet.np.fmod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.modf.html">mxnet.np.modf</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.divmod.html">mxnet.np.divmod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.floor_divide.html">mxnet.np.floor_divide</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.clip.html">mxnet.np.clip</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.sqrt.html">mxnet.np.sqrt</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.cbrt.html">mxnet.np.cbrt</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.square.html">mxnet.np.square</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.absolute.html">mxnet.np.absolute</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.sign.html">mxnet.np.sign</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.maximum.html">mxnet.np.maximum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.minimum.html">mxnet.np.minimum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.fabs.html">mxnet.np.fabs</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.heaviside.html">mxnet.np.heaviside</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.fmax.html">mxnet.np.fmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.fmin.html">mxnet.np.fmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nan_to_num.html">mxnet.np.nan_to_num</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.interp.html">mxnet.np.interp</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../np/random/index.html">np.random</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.choice.html">mxnet.np.random.choice</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.shuffle.html">mxnet.np.random.shuffle</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.normal.html">mxnet.np.random.normal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.uniform.html">mxnet.np.random.uniform</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.rand.html">mxnet.np.random.rand</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.randint.html">mxnet.np.random.randint</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.beta.html">mxnet.np.random.beta</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.chisquare.html">mxnet.np.random.chisquare</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.exponential.html">mxnet.np.random.exponential</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.f.html">mxnet.np.random.f</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.gamma.html">mxnet.np.random.gamma</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.gumbel.html">mxnet.np.random.gumbel</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.laplace.html">mxnet.np.random.laplace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.logistic.html">mxnet.np.random.logistic</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.lognormal.html">mxnet.np.random.lognormal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.multinomial.html">mxnet.np.random.multinomial</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.multivariate_normal.html">mxnet.np.random.multivariate_normal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.pareto.html">mxnet.np.random.pareto</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.power.html">mxnet.np.random.power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.rayleigh.html">mxnet.np.random.rayleigh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.weibull.html">mxnet.np.random.weibull</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../np/routines.sort.html">Sorting, searching, and counting</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ndarray.sort.html">mxnet.np.ndarray.sort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.sort.html">mxnet.np.sort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.lexsort.html">mxnet.np.lexsort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.argsort.html">mxnet.np.argsort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.msort.html">mxnet.np.msort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.partition.html">mxnet.np.partition</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.argpartition.html">mxnet.np.argpartition</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.argmax.html">mxnet.np.argmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.argmin.html">mxnet.np.argmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanargmax.html">mxnet.np.nanargmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanargmin.html">mxnet.np.nanargmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.argwhere.html">mxnet.np.argwhere</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nonzero.html">mxnet.np.nonzero</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.flatnonzero.html">mxnet.np.flatnonzero</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.where.html">mxnet.np.where</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.searchsorted.html">mxnet.np.searchsorted</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.extract.html">mxnet.np.extract</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.count_nonzero.html">mxnet.np.count_nonzero</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../np/routines.statistics.html">Statistics</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.min.html">mxnet.np.min</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.max.html">mxnet.np.max</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.amin.html">mxnet.np.amin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.amax.html">mxnet.np.amax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanmin.html">mxnet.np.nanmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanmax.html">mxnet.np.nanmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ptp.html">mxnet.np.ptp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.percentile.html">mxnet.np.percentile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanpercentile.html">mxnet.np.nanpercentile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.quantile.html">mxnet.np.quantile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanquantile.html">mxnet.np.nanquantile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.mean.html">mxnet.np.mean</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.std.html">mxnet.np.std</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.var.html">mxnet.np.var</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.median.html">mxnet.np.median</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.average.html">mxnet.np.average</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanmedian.html">mxnet.np.nanmedian</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanstd.html">mxnet.np.nanstd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanvar.html">mxnet.np.nanvar</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.corrcoef.html">mxnet.np.corrcoef</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.correlate.html">mxnet.np.correlate</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.cov.html">mxnet.np.cov</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.histogram.html">mxnet.np.histogram</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.histogram2d.html">mxnet.np.histogram2d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.histogramdd.html">mxnet.np.histogramdd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.bincount.html">mxnet.np.bincount</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.histogram_bin_edges.html">mxnet.np.histogram_bin_edges</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.digitize.html">mxnet.np.digitize</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../npx/index.html">NPX: NumPy Neural Network Extension</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.set_np.html">mxnet.npx.set_np</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.reset_np.html">mxnet.npx.reset_np</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.cpu.html">mxnet.npx.cpu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.cpu_pinned.html">mxnet.npx.cpu_pinned</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.gpu.html">mxnet.npx.gpu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.gpu_memory_info.html">mxnet.npx.gpu_memory_info</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.current_device.html">mxnet.npx.current_device</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.num_gpus.html">mxnet.npx.num_gpus</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.activation.html">mxnet.npx.activation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.batch_norm.html">mxnet.npx.batch_norm</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.convolution.html">mxnet.npx.convolution</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.dropout.html">mxnet.npx.dropout</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.embedding.html">mxnet.npx.embedding</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.fully_connected.html">mxnet.npx.fully_connected</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.layer_norm.html">mxnet.npx.layer_norm</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.pooling.html">mxnet.npx.pooling</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.rnn.html">mxnet.npx.rnn</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.leaky_relu.html">mxnet.npx.leaky_relu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.multibox_detection.html">mxnet.npx.multibox_detection</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.multibox_prior.html">mxnet.npx.multibox_prior</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.multibox_target.html">mxnet.npx.multibox_target</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.roi_pooling.html">mxnet.npx.roi_pooling</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.sigmoid.html">mxnet.npx.sigmoid</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.relu.html">mxnet.npx.relu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.smooth_l1.html">mxnet.npx.smooth_l1</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.softmax.html">mxnet.npx.softmax</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.log_softmax.html">mxnet.npx.log_softmax</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.topk.html">mxnet.npx.topk</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.waitall.html">mxnet.npx.waitall</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.load.html">mxnet.npx.load</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.save.html">mxnet.npx.save</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.one_hot.html">mxnet.npx.one_hot</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.pick.html">mxnet.npx.pick</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.reshape_like.html">mxnet.npx.reshape_like</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.batch_flatten.html">mxnet.npx.batch_flatten</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.batch_dot.html">mxnet.npx.batch_dot</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.gamma.html">mxnet.npx.gamma</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.sequence_mask.html">mxnet.npx.sequence_mask</a></li>
</ul>
</li>
<li class="toctree-l2 current"><a class="reference internal" href="../index.html">mxnet.gluon</a><ul class="current">
<li class="toctree-l3"><a class="reference internal" href="../block.html">gluon.Block</a></li>
<li class="toctree-l3"><a class="reference internal" href="../hybrid_block.html">gluon.HybridBlock</a></li>
<li class="toctree-l3"><a class="reference internal" href="../symbol_block.html">gluon.SymbolBlock</a></li>
<li class="toctree-l3"><a class="reference internal" href="../constant.html">gluon.Constant</a></li>
<li class="toctree-l3"><a class="reference internal" href="../parameter.html">gluon.Parameter</a></li>
<li class="toctree-l3"><a class="reference internal" href="../trainer.html">gluon.Trainer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../contrib/index.html">gluon.contrib</a></li>
<li class="toctree-l3"><a class="reference internal" href="../data/index.html">gluon.data</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../data/vision/index.html">data.vision</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../data/vision/datasets/index.html">vision.datasets</a></li>
<li class="toctree-l5"><a class="reference internal" href="../data/vision/transforms/index.html">vision.transforms</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3 current"><a class="current reference internal" href="#">gluon.loss</a></li>
<li class="toctree-l3"><a class="reference internal" href="../metric/index.html">gluon.metric</a></li>
<li class="toctree-l3"><a class="reference internal" href="../model_zoo/index.html">gluon.model_zoo.vision</a></li>
<li class="toctree-l3"><a class="reference internal" href="../nn/index.html">gluon.nn</a></li>
<li class="toctree-l3"><a class="reference internal" href="../rnn/index.html">gluon.rnn</a></li>
<li class="toctree-l3"><a class="reference internal" href="../utils/index.html">gluon.utils</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../autograd/index.html">mxnet.autograd</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../initializer/index.html">mxnet.initializer</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../optimizer/index.html">mxnet.optimizer</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../lr_scheduler/index.html">mxnet.lr_scheduler</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../kvstore/index.html">KVStore: Communication for Distributed Training</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../kvstore/index.html#horovod">Horovod</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../kvstore/generated/mxnet.kvstore.Horovod.html">mxnet.kvstore.Horovod</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../kvstore/index.html#byteps">BytePS</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../kvstore/generated/mxnet.kvstore.BytePS.html">mxnet.kvstore.BytePS</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../kvstore/index.html#kvstore-interface">KVStore Interface</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../kvstore/generated/mxnet.kvstore.KVStore.html">mxnet.kvstore.KVStore</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../kvstore/generated/mxnet.kvstore.KVStoreBase.html">mxnet.kvstore.KVStoreBase</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../kvstore/generated/mxnet.kvstore.KVStoreServer.html">mxnet.kvstore.KVStoreServer</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../contrib/index.html">mxnet.contrib</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/io/index.html">contrib.io</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/ndarray/index.html">contrib.ndarray</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/onnx/index.html">contrib.onnx</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/quantization/index.html">contrib.quantization</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/symbol/index.html">contrib.symbol</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorboard/index.html">contrib.tensorboard</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorrt/index.html">contrib.tensorrt</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/text/index.html">contrib.text</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../legacy/index.html">Legacy</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../legacy/callback/index.html">mxnet.callback</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../legacy/image/index.html">mxnet.image</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../legacy/io/index.html">mxnet.io</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../legacy/ndarray/index.html">mxnet.ndarray</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/ndarray.html">ndarray</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/contrib/index.html">ndarray.contrib</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/image/index.html">ndarray.image</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/linalg/index.html">ndarray.linalg</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/op/index.html">ndarray.op</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/random/index.html">ndarray.random</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/register/index.html">ndarray.register</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/sparse/index.html">ndarray.sparse</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/utils/index.html">ndarray.utils</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../legacy/recordio/index.html">mxnet.recordio</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../legacy/symbol/index.html">mxnet.symbol</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/symbol.html">symbol</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/contrib/index.html">symbol.contrib</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/image/index.html">symbol.image</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/linalg/index.html">symbol.linalg</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/op/index.html">symbol.op</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/random/index.html">symbol.random</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/register/index.html">symbol.register</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/sparse/index.html">symbol.sparse</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../legacy/visualization/index.html">mxnet.visualization</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../device/index.html">mxnet.device</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../engine/index.html">mxnet.engine</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../executor/index.html">mxnet.executor</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../kvstore_server/index.html">mxnet.kvstore_server</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../profiler/index.html">mxnet.profiler</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../rtc/index.html">mxnet.rtc</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../runtime/index.html">mxnet.runtime</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../runtime/generated/mxnet.runtime.Feature.html">mxnet.runtime.Feature</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../runtime/generated/mxnet.runtime.Features.html">mxnet.runtime.Features</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../runtime/generated/mxnet.runtime.feature_list.html">mxnet.runtime.feature_list</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../test_utils/index.html">mxnet.test_utils</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../util/index.html">mxnet.util</a></li>
</ul>
</li>
</ul>
</nav>
</div>
</header>
<main class="mdl-layout__content" tabIndex="0">
<header class="mdl-layout__drawer">
<div class="globaltoc">
<span class="mdl-layout-title toc">Table Of Contents</span>
<nav class="mdl-navigation">
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="../../../tutorials/index.html">Python Tutorials</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/getting-started/index.html">Getting Started</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/index.html">Crash Course</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/0-introduction.html">Introduction</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/1-nparray.html">Step 1: Manipulate data with NP on MXNet</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/2-create-nn.html">Step 2: Create a neural network</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/3-autograd.html">Step 3: Automatic differentiation with autograd</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/4-components.html">Step 4: Necessary components that are not in the network</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/5-datasets.html">Step 5: <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s and <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/5-datasets.html#Using-own-data-with-included-Datasets">Using own data with included <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/5-datasets.html#Using-your-own-data-with-custom-Datasets">Using your own data with custom <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/5-datasets.html#New-in-MXNet-2.0:-faster-C++-backend-dataloaders">New in MXNet 2.0: faster C++ backend dataloaders</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/6-train-nn.html">Step 6: Train a Neural Network</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/7-use-gpus.html">Step 7: Load and Run a NN using GPU</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/index.html">Moving to MXNet from Other Frameworks</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/pytorch.html">PyTorch vs Apache MXNet</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/gluon_from_experiment_to_deployment.html">Gluon: from experiment to deployment</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/gluon_migration_guide.html">Gluon2.0: Migration Guide</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/logistic_regression_explained.html">Logistic regression explained</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/image/mnist.html">MNIST</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/packages/index.html">Packages</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/autograd/index.html">Automatic Differentiation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/gluon/index.html">Gluon</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/index.html">Blocks</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/custom-layer.html">Custom Layers</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/hybridize.html">Hybridize</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/init.html">Initialization</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/naming.html">Parameter and Block Naming</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/nn.html">Layers and Blocks</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/parameters.html">Parameter Management</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/activations/activations.html">Activation Blocks</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/data/index.html">Data Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/data_augmentation.html">Image Augmentation</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html">Gluon <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s and <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-included-Datasets">Using own data with included <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Using-own-data-with-custom-Datasets">Using own data with custom <code class="docutils literal notranslate"><span class="pre">Dataset</span></code>s</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/data/datasets.html#Appendix:-Upgrading-from-Module-DataIter-to-Gluon-DataLoader">Appendix: Upgrading from Module <code class="docutils literal notranslate"><span class="pre">DataIter</span></code> to Gluon <code class="docutils literal notranslate"><span class="pre">DataLoader</span></code></a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/image/index.html">Image Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/info_gan.html">Image similarity search with InfoGAN</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/mnist.html">Handwritten Digit Recognition</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/index.html">Losses</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/custom-loss.html">Custom Loss Blocks</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/kl_divergence.html">Kullback-Leibler (KL) Divergence</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/loss.html">Loss functions</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/text/index.html">Text Tutorials</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/gnmt.html">Google Neural Machine Translation</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/transformer.html">Machine Translation with Transformer</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/training/index.html">Training</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/fit_api_tutorial.html">MXNet Gluon Fit API</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/trainer.html">Trainer</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/index.html">Learning Rates</a><ul>
<li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_finder.html">Learning Rate Finder</a></li>
<li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules.html">Learning Rate Schedules</a></li>
<li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules_advanced.html">Advanced Learning Rate Schedules</a></li>
</ul>
</li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/normalization/index.html">Normalization Blocks</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/kvstore/index.html">KVStore</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/kvstore/kvstore.html">Distributed Key-Value Store</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/legacy/index.html">Legacy</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/index.html">NDArray</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/01-ndarray-intro.html">An Intro: Manipulate Data the MXNet Way with NDArray</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/02-ndarray-operations.html">NDArray Operations</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/03-ndarray-contexts.html">NDArray Contexts</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/gotchas_numpy_in_mxnet.html">Gotchas using NumPy in Apache MXNet</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/sparse/index.html">Tutorials</a><ul>
<li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/sparse/csr.html">CSRNDArray - NDArray in Compressed Sparse Row Storage Format</a></li>
<li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/legacy/ndarray/sparse/row_sparse.html">RowSparseNDArray - NDArray for Sparse Gradient Updates</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/np/index.html">What is NP on MXNet</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/np/cheat-sheet.html">The NP on MXNet cheat sheet</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/np/np-vs-numpy.html">Differences between NP on MXNet and NumPy</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/onnx/index.html">ONNX</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/fine_tuning_gluon.html">Fine-tuning an ONNX model</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/inference_on_onnx_model.html">Running inference on MXNet/Gluon from an ONNX model</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/deploy/export/onnx.html">Export ONNX Models</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/optimizer/index.html">Optimizers</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/viz/index.html">Visualization</a><ul>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/visualize_graph">Visualize networks</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/performance/index.html">Performance</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/compression/index.html">Compression</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/compression/int8.html">Deploy with int-8</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/float16">Float16</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/gradient_compression">Gradient Compression</a></li>
<li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/int8_inference.html">GluonCV with Quantized Models</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/backend/index.html">Accelerated Backend Tools</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/dnnl/index.html">oneDNN</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/dnnl/dnnl_readme.html">Install MXNet with oneDNN</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/dnnl/dnnl_quantization.html">oneDNN Quantization</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/dnnl/dnnl_quantization_inc.html">Improving accuracy with Intel® Neural Compressor</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/tvm.html">Use TVM</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/profiler.html">Profiling MXNet Models</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/amp.html">Using AMP: Automatic Mixed Precision</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/deploy/index.html">Deployment</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/export/index.html">Export</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/export/onnx.html">Exporting to ONNX format</a></li>
<li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/export_network.html">Export Gluon CV Models</a></li>
<li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Save / Load Parameters</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/inference/index.html">Inference</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/cpp.html">Deploy into C++</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/image_classification_jetson.html">Image Classication using pretrained ResNet-50 model on Jetson module</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/index.html">Run on AWS</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_ec2.html">Run on an EC2 Instance</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_sagemaker.html">Run on Amazon SageMaker</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/cloud.html">MXNet on the Cloud</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../../tutorials/extend/index.html">Extend</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../../tutorials/extend/customop.html">Custom Numpy Operators</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/new_op">New Operator Creation</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/add_op_in_backend">New Operator in MXNet Backend</a></li>
<li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/using_rtc">Using RTC for CUDA kernels</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 current"><a class="reference internal" href="../../index.html">Python API</a><ul class="current">
<li class="toctree-l2"><a class="reference internal" href="../../np/index.html">mxnet.np</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../np/arrays.html">Array objects</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../np/arrays.ndarray.html">The N-dimensional array (<code class="xref py py-class docutils literal notranslate"><span class="pre">ndarray</span></code>)</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../np/arrays.indexing.html">Indexing</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../np/routines.html">Routines</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../np/routines.array-creation.html">Array creation routines</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.eye.html">mxnet.np.eye</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.empty.html">mxnet.np.empty</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.full.html">mxnet.np.full</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.identity.html">mxnet.np.identity</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ones.html">mxnet.np.ones</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ones_like.html">mxnet.np.ones_like</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.zeros.html">mxnet.np.zeros</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.zeros_like.html">mxnet.np.zeros_like</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.array.html">mxnet.np.array</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.copy.html">mxnet.np.copy</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arange.html">mxnet.np.arange</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linspace.html">mxnet.np.linspace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.logspace.html">mxnet.np.logspace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.meshgrid.html">mxnet.np.meshgrid</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.tril.html">mxnet.np.tril</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../np/routines.array-manipulation.html">Array manipulation routines</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.reshape.html">mxnet.np.reshape</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ravel.html">mxnet.np.ravel</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ndarray.flatten.html">mxnet.np.ndarray.flatten</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.swapaxes.html">mxnet.np.swapaxes</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ndarray.T.html">mxnet.np.ndarray.T</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.transpose.html">mxnet.np.transpose</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.moveaxis.html">mxnet.np.moveaxis</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.rollaxis.html">mxnet.np.rollaxis</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.expand_dims.html">mxnet.np.expand_dims</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.squeeze.html">mxnet.np.squeeze</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.broadcast_to.html">mxnet.np.broadcast_to</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.broadcast_arrays.html">mxnet.np.broadcast_arrays</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.atleast_1d.html">mxnet.np.atleast_1d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.atleast_2d.html">mxnet.np.atleast_2d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.atleast_3d.html">mxnet.np.atleast_3d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.concatenate.html">mxnet.np.concatenate</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.stack.html">mxnet.np.stack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.dstack.html">mxnet.np.dstack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.vstack.html">mxnet.np.vstack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.column_stack.html">mxnet.np.column_stack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.hstack.html">mxnet.np.hstack</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.split.html">mxnet.np.split</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.hsplit.html">mxnet.np.hsplit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.vsplit.html">mxnet.np.vsplit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.array_split.html">mxnet.np.array_split</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.dsplit.html">mxnet.np.dsplit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.tile.html">mxnet.np.tile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.repeat.html">mxnet.np.repeat</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.unique.html">mxnet.np.unique</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.delete.html">mxnet.np.delete</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.insert.html">mxnet.np.insert</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.append.html">mxnet.np.append</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.resize.html">mxnet.np.resize</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.trim_zeros.html">mxnet.np.trim_zeros</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.reshape.html">mxnet.np.reshape</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.flip.html">mxnet.np.flip</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.roll.html">mxnet.np.roll</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.rot90.html">mxnet.np.rot90</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.fliplr.html">mxnet.np.fliplr</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.flipud.html">mxnet.np.flipud</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../np/routines.io.html">Input and output</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.genfromtxt.html">mxnet.np.genfromtxt</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ndarray.tolist.html">mxnet.np.ndarray.tolist</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.set_printoptions.html">mxnet.np.set_printoptions</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../np/routines.linalg.html">Linear algebra (<code class="xref py py-mod docutils literal notranslate"><span class="pre">numpy.linalg</span></code>)</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.dot.html">mxnet.np.dot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.vdot.html">mxnet.np.vdot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.inner.html">mxnet.np.inner</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.outer.html">mxnet.np.outer</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.tensordot.html">mxnet.np.tensordot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.einsum.html">mxnet.np.einsum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.multi_dot.html">mxnet.np.linalg.multi_dot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.matmul.html">mxnet.np.matmul</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.matrix_power.html">mxnet.np.linalg.matrix_power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.kron.html">mxnet.np.kron</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.svd.html">mxnet.np.linalg.svd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.cholesky.html">mxnet.np.linalg.cholesky</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.qr.html">mxnet.np.linalg.qr</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.eig.html">mxnet.np.linalg.eig</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.eigh.html">mxnet.np.linalg.eigh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.eigvals.html">mxnet.np.linalg.eigvals</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.eigvalsh.html">mxnet.np.linalg.eigvalsh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.norm.html">mxnet.np.linalg.norm</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.trace.html">mxnet.np.trace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.cond.html">mxnet.np.linalg.cond</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.det.html">mxnet.np.linalg.det</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.matrix_rank.html">mxnet.np.linalg.matrix_rank</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.slogdet.html">mxnet.np.linalg.slogdet</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.solve.html">mxnet.np.linalg.solve</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.tensorsolve.html">mxnet.np.linalg.tensorsolve</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.lstsq.html">mxnet.np.linalg.lstsq</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.inv.html">mxnet.np.linalg.inv</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.pinv.html">mxnet.np.linalg.pinv</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.linalg.tensorinv.html">mxnet.np.linalg.tensorinv</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../np/routines.math.html">Mathematical functions</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.sin.html">mxnet.np.sin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.cos.html">mxnet.np.cos</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.tan.html">mxnet.np.tan</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arcsin.html">mxnet.np.arcsin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arccos.html">mxnet.np.arccos</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arctan.html">mxnet.np.arctan</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.degrees.html">mxnet.np.degrees</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.radians.html">mxnet.np.radians</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.hypot.html">mxnet.np.hypot</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arctan2.html">mxnet.np.arctan2</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.deg2rad.html">mxnet.np.deg2rad</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.rad2deg.html">mxnet.np.rad2deg</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.unwrap.html">mxnet.np.unwrap</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.sinh.html">mxnet.np.sinh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.cosh.html">mxnet.np.cosh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.tanh.html">mxnet.np.tanh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arcsinh.html">mxnet.np.arcsinh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arccosh.html">mxnet.np.arccosh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.arctanh.html">mxnet.np.arctanh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.rint.html">mxnet.np.rint</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.fix.html">mxnet.np.fix</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.floor.html">mxnet.np.floor</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ceil.html">mxnet.np.ceil</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.trunc.html">mxnet.np.trunc</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.around.html">mxnet.np.around</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.round_.html">mxnet.np.round_</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.sum.html">mxnet.np.sum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.prod.html">mxnet.np.prod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.cumsum.html">mxnet.np.cumsum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanprod.html">mxnet.np.nanprod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nansum.html">mxnet.np.nansum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.cumprod.html">mxnet.np.cumprod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nancumprod.html">mxnet.np.nancumprod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nancumsum.html">mxnet.np.nancumsum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.diff.html">mxnet.np.diff</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ediff1d.html">mxnet.np.ediff1d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.cross.html">mxnet.np.cross</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.trapz.html">mxnet.np.trapz</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.exp.html">mxnet.np.exp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.expm1.html">mxnet.np.expm1</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.log.html">mxnet.np.log</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.log10.html">mxnet.np.log10</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.log2.html">mxnet.np.log2</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.log1p.html">mxnet.np.log1p</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.logaddexp.html">mxnet.np.logaddexp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.i0.html">mxnet.np.i0</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ldexp.html">mxnet.np.ldexp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.signbit.html">mxnet.np.signbit</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.copysign.html">mxnet.np.copysign</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.frexp.html">mxnet.np.frexp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.spacing.html">mxnet.np.spacing</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.lcm.html">mxnet.np.lcm</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.gcd.html">mxnet.np.gcd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.add.html">mxnet.np.add</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.reciprocal.html">mxnet.np.reciprocal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.negative.html">mxnet.np.negative</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.divide.html">mxnet.np.divide</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.power.html">mxnet.np.power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.subtract.html">mxnet.np.subtract</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.mod.html">mxnet.np.mod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.multiply.html">mxnet.np.multiply</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.true_divide.html">mxnet.np.true_divide</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.remainder.html">mxnet.np.remainder</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.positive.html">mxnet.np.positive</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.float_power.html">mxnet.np.float_power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.fmod.html">mxnet.np.fmod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.modf.html">mxnet.np.modf</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.divmod.html">mxnet.np.divmod</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.floor_divide.html">mxnet.np.floor_divide</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.clip.html">mxnet.np.clip</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.sqrt.html">mxnet.np.sqrt</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.cbrt.html">mxnet.np.cbrt</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.square.html">mxnet.np.square</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.absolute.html">mxnet.np.absolute</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.sign.html">mxnet.np.sign</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.maximum.html">mxnet.np.maximum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.minimum.html">mxnet.np.minimum</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.fabs.html">mxnet.np.fabs</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.heaviside.html">mxnet.np.heaviside</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.fmax.html">mxnet.np.fmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.fmin.html">mxnet.np.fmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nan_to_num.html">mxnet.np.nan_to_num</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.interp.html">mxnet.np.interp</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../np/random/index.html">np.random</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.choice.html">mxnet.np.random.choice</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.shuffle.html">mxnet.np.random.shuffle</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.normal.html">mxnet.np.random.normal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.uniform.html">mxnet.np.random.uniform</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.rand.html">mxnet.np.random.rand</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.randint.html">mxnet.np.random.randint</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.beta.html">mxnet.np.random.beta</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.chisquare.html">mxnet.np.random.chisquare</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.exponential.html">mxnet.np.random.exponential</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.f.html">mxnet.np.random.f</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.gamma.html">mxnet.np.random.gamma</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.gumbel.html">mxnet.np.random.gumbel</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.laplace.html">mxnet.np.random.laplace</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.logistic.html">mxnet.np.random.logistic</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.lognormal.html">mxnet.np.random.lognormal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.multinomial.html">mxnet.np.random.multinomial</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.multivariate_normal.html">mxnet.np.random.multivariate_normal</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.pareto.html">mxnet.np.random.pareto</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.power.html">mxnet.np.random.power</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.rayleigh.html">mxnet.np.random.rayleigh</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/random/generated/mxnet.np.random.weibull.html">mxnet.np.random.weibull</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../np/routines.sort.html">Sorting, searching, and counting</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ndarray.sort.html">mxnet.np.ndarray.sort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.sort.html">mxnet.np.sort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.lexsort.html">mxnet.np.lexsort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.argsort.html">mxnet.np.argsort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.msort.html">mxnet.np.msort</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.partition.html">mxnet.np.partition</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.argpartition.html">mxnet.np.argpartition</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.argmax.html">mxnet.np.argmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.argmin.html">mxnet.np.argmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanargmax.html">mxnet.np.nanargmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanargmin.html">mxnet.np.nanargmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.argwhere.html">mxnet.np.argwhere</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nonzero.html">mxnet.np.nonzero</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.flatnonzero.html">mxnet.np.flatnonzero</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.where.html">mxnet.np.where</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.searchsorted.html">mxnet.np.searchsorted</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.extract.html">mxnet.np.extract</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.count_nonzero.html">mxnet.np.count_nonzero</a></li>
</ul>
</li>
<li class="toctree-l4"><a class="reference internal" href="../../np/routines.statistics.html">Statistics</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.min.html">mxnet.np.min</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.max.html">mxnet.np.max</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.amin.html">mxnet.np.amin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.amax.html">mxnet.np.amax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanmin.html">mxnet.np.nanmin</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanmax.html">mxnet.np.nanmax</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.ptp.html">mxnet.np.ptp</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.percentile.html">mxnet.np.percentile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanpercentile.html">mxnet.np.nanpercentile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.quantile.html">mxnet.np.quantile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanquantile.html">mxnet.np.nanquantile</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.mean.html">mxnet.np.mean</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.std.html">mxnet.np.std</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.var.html">mxnet.np.var</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.median.html">mxnet.np.median</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.average.html">mxnet.np.average</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanmedian.html">mxnet.np.nanmedian</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanstd.html">mxnet.np.nanstd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.nanvar.html">mxnet.np.nanvar</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.corrcoef.html">mxnet.np.corrcoef</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.correlate.html">mxnet.np.correlate</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.cov.html">mxnet.np.cov</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.histogram.html">mxnet.np.histogram</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.histogram2d.html">mxnet.np.histogram2d</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.histogramdd.html">mxnet.np.histogramdd</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.bincount.html">mxnet.np.bincount</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.histogram_bin_edges.html">mxnet.np.histogram_bin_edges</a></li>
<li class="toctree-l5"><a class="reference internal" href="../../np/generated/mxnet.np.digitize.html">mxnet.np.digitize</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../npx/index.html">NPX: NumPy Neural Network Extension</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.set_np.html">mxnet.npx.set_np</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.reset_np.html">mxnet.npx.reset_np</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.cpu.html">mxnet.npx.cpu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.cpu_pinned.html">mxnet.npx.cpu_pinned</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.gpu.html">mxnet.npx.gpu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.gpu_memory_info.html">mxnet.npx.gpu_memory_info</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.current_device.html">mxnet.npx.current_device</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.num_gpus.html">mxnet.npx.num_gpus</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.activation.html">mxnet.npx.activation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.batch_norm.html">mxnet.npx.batch_norm</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.convolution.html">mxnet.npx.convolution</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.dropout.html">mxnet.npx.dropout</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.embedding.html">mxnet.npx.embedding</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.fully_connected.html">mxnet.npx.fully_connected</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.layer_norm.html">mxnet.npx.layer_norm</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.pooling.html">mxnet.npx.pooling</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.rnn.html">mxnet.npx.rnn</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.leaky_relu.html">mxnet.npx.leaky_relu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.multibox_detection.html">mxnet.npx.multibox_detection</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.multibox_prior.html">mxnet.npx.multibox_prior</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.multibox_target.html">mxnet.npx.multibox_target</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.roi_pooling.html">mxnet.npx.roi_pooling</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.sigmoid.html">mxnet.npx.sigmoid</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.relu.html">mxnet.npx.relu</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.smooth_l1.html">mxnet.npx.smooth_l1</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.softmax.html">mxnet.npx.softmax</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.log_softmax.html">mxnet.npx.log_softmax</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.topk.html">mxnet.npx.topk</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.waitall.html">mxnet.npx.waitall</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.load.html">mxnet.npx.load</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.save.html">mxnet.npx.save</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.one_hot.html">mxnet.npx.one_hot</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.pick.html">mxnet.npx.pick</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.reshape_like.html">mxnet.npx.reshape_like</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.batch_flatten.html">mxnet.npx.batch_flatten</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.batch_dot.html">mxnet.npx.batch_dot</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.gamma.html">mxnet.npx.gamma</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../npx/generated/mxnet.npx.sequence_mask.html">mxnet.npx.sequence_mask</a></li>
</ul>
</li>
<li class="toctree-l2 current"><a class="reference internal" href="../index.html">mxnet.gluon</a><ul class="current">
<li class="toctree-l3"><a class="reference internal" href="../block.html">gluon.Block</a></li>
<li class="toctree-l3"><a class="reference internal" href="../hybrid_block.html">gluon.HybridBlock</a></li>
<li class="toctree-l3"><a class="reference internal" href="../symbol_block.html">gluon.SymbolBlock</a></li>
<li class="toctree-l3"><a class="reference internal" href="../constant.html">gluon.Constant</a></li>
<li class="toctree-l3"><a class="reference internal" href="../parameter.html">gluon.Parameter</a></li>
<li class="toctree-l3"><a class="reference internal" href="../trainer.html">gluon.Trainer</a></li>
<li class="toctree-l3"><a class="reference internal" href="../contrib/index.html">gluon.contrib</a></li>
<li class="toctree-l3"><a class="reference internal" href="../data/index.html">gluon.data</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../data/vision/index.html">data.vision</a><ul>
<li class="toctree-l5"><a class="reference internal" href="../data/vision/datasets/index.html">vision.datasets</a></li>
<li class="toctree-l5"><a class="reference internal" href="../data/vision/transforms/index.html">vision.transforms</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l3 current"><a class="current reference internal" href="#">gluon.loss</a></li>
<li class="toctree-l3"><a class="reference internal" href="../metric/index.html">gluon.metric</a></li>
<li class="toctree-l3"><a class="reference internal" href="../model_zoo/index.html">gluon.model_zoo.vision</a></li>
<li class="toctree-l3"><a class="reference internal" href="../nn/index.html">gluon.nn</a></li>
<li class="toctree-l3"><a class="reference internal" href="../rnn/index.html">gluon.rnn</a></li>
<li class="toctree-l3"><a class="reference internal" href="../utils/index.html">gluon.utils</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../autograd/index.html">mxnet.autograd</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../initializer/index.html">mxnet.initializer</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../optimizer/index.html">mxnet.optimizer</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../lr_scheduler/index.html">mxnet.lr_scheduler</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../kvstore/index.html">KVStore: Communication for Distributed Training</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../kvstore/index.html#horovod">Horovod</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../kvstore/generated/mxnet.kvstore.Horovod.html">mxnet.kvstore.Horovod</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../kvstore/index.html#byteps">BytePS</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../kvstore/generated/mxnet.kvstore.BytePS.html">mxnet.kvstore.BytePS</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../kvstore/index.html#kvstore-interface">KVStore Interface</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../kvstore/generated/mxnet.kvstore.KVStore.html">mxnet.kvstore.KVStore</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../kvstore/generated/mxnet.kvstore.KVStoreBase.html">mxnet.kvstore.KVStoreBase</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../kvstore/generated/mxnet.kvstore.KVStoreServer.html">mxnet.kvstore.KVStoreServer</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../contrib/index.html">mxnet.contrib</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/io/index.html">contrib.io</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/ndarray/index.html">contrib.ndarray</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/onnx/index.html">contrib.onnx</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/quantization/index.html">contrib.quantization</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/symbol/index.html">contrib.symbol</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorboard/index.html">contrib.tensorboard</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorrt/index.html">contrib.tensorrt</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../contrib/text/index.html">contrib.text</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../legacy/index.html">Legacy</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../legacy/callback/index.html">mxnet.callback</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../legacy/image/index.html">mxnet.image</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../legacy/io/index.html">mxnet.io</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../legacy/ndarray/index.html">mxnet.ndarray</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/ndarray.html">ndarray</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/contrib/index.html">ndarray.contrib</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/image/index.html">ndarray.image</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/linalg/index.html">ndarray.linalg</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/op/index.html">ndarray.op</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/random/index.html">ndarray.random</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/register/index.html">ndarray.register</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/sparse/index.html">ndarray.sparse</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/ndarray/utils/index.html">ndarray.utils</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../legacy/recordio/index.html">mxnet.recordio</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../legacy/symbol/index.html">mxnet.symbol</a><ul>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/symbol.html">symbol</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/contrib/index.html">symbol.contrib</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/image/index.html">symbol.image</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/linalg/index.html">symbol.linalg</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/op/index.html">symbol.op</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/random/index.html">symbol.random</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/register/index.html">symbol.register</a></li>
<li class="toctree-l4"><a class="reference internal" href="../../legacy/symbol/sparse/index.html">symbol.sparse</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../../legacy/visualization/index.html">mxnet.visualization</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../device/index.html">mxnet.device</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../engine/index.html">mxnet.engine</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../executor/index.html">mxnet.executor</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../kvstore_server/index.html">mxnet.kvstore_server</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../profiler/index.html">mxnet.profiler</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../rtc/index.html">mxnet.rtc</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../runtime/index.html">mxnet.runtime</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../runtime/generated/mxnet.runtime.Feature.html">mxnet.runtime.Feature</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../runtime/generated/mxnet.runtime.Features.html">mxnet.runtime.Features</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../runtime/generated/mxnet.runtime.feature_list.html">mxnet.runtime.feature_list</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../test_utils/index.html">mxnet.test_utils</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../util/index.html">mxnet.util</a></li>
</ul>
</li>
</ul>
</nav>
</div>
</header>
<div class="document">
<div class="page-content" role="main">
<div class="section" id="gluon-loss">
<h1>gluon.loss<a class="headerlink" href="#gluon-loss" title="Permalink to this headline"></a></h1>
<p>Gluon provides pre-defined loss functions in the <a class="reference internal" href="#module-mxnet.gluon.loss" title="mxnet.gluon.loss"><code class="xref py py-mod docutils literal notranslate"><span class="pre">mxnet.gluon.loss</span></code></a>
module.</p>
<span class="target" id="module-mxnet.gluon.loss"></span><p>losses for training neural networks</p>
<p><strong>Classes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Loss</span></code></a>(weight, batch_axis, **kwargs)</p></td>
<td><p>Base class for loss.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss" title="mxnet.gluon.loss.L2Loss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">L2Loss</span></code></a>([weight, batch_axis])</p></td>
<td><p>Calculates the mean squared error between <cite>label</cite> and <cite>pred</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss" title="mxnet.gluon.loss.L1Loss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">L1Loss</span></code></a>([weight, batch_axis])</p></td>
<td><p>Calculates the mean absolute error between <cite>label</cite> and <cite>pred</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SigmoidBinaryCrossEntropyLoss</span></code></a>([…])</p></td>
<td><p>The cross-entropy loss for binary classification.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBCELoss" title="mxnet.gluon.loss.SigmoidBCELoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SigmoidBCELoss</span></code></a></p></td>
<td><p>The cross-entropy loss for binary classification.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SoftmaxCrossEntropyLoss</span></code></a>([axis, …])</p></td>
<td><p>Computes the softmax cross entropy loss.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCELoss" title="mxnet.gluon.loss.SoftmaxCELoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SoftmaxCELoss</span></code></a></p></td>
<td><p>Computes the softmax cross entropy loss.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss" title="mxnet.gluon.loss.KLDivLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">KLDivLoss</span></code></a>([from_logits, axis, weight, …])</p></td>
<td><p>The Kullback-Leibler divergence loss.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss" title="mxnet.gluon.loss.CTCLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">CTCLoss</span></code></a>([layout, label_layout, weight])</p></td>
<td><p>Connectionist Temporal Classification Loss.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss" title="mxnet.gluon.loss.HuberLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">HuberLoss</span></code></a>([rho, weight, batch_axis])</p></td>
<td><p>Calculates smoothed L1 loss that is equal to L1 loss if absolute error exceeds rho but is equal to L2 loss otherwise.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss" title="mxnet.gluon.loss.HingeLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">HingeLoss</span></code></a>([margin, weight, batch_axis])</p></td>
<td><p>Calculates the hinge loss function often used in SVMs:</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss" title="mxnet.gluon.loss.SquaredHingeLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SquaredHingeLoss</span></code></a>([margin, weight, batch_axis])</p></td>
<td><p>Calculates the soft-margin loss function used in SVMs:</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss" title="mxnet.gluon.loss.LogisticLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">LogisticLoss</span></code></a>([weight, batch_axis, label_format])</p></td>
<td><p>Calculates the logistic loss (for binary classification only):</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss" title="mxnet.gluon.loss.TripletLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">TripletLoss</span></code></a>([margin, weight, batch_axis])</p></td>
<td><p>Calculates triplet loss given three input tensors and a positive margin.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss" title="mxnet.gluon.loss.PoissonNLLLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">PoissonNLLLoss</span></code></a>([weight, from_logits, …])</p></td>
<td><p>For a target (a random variable) drawn from a Poisson distribution, calculates the negative log-likelihood loss.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss" title="mxnet.gluon.loss.CosineEmbeddingLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">CosineEmbeddingLoss</span></code></a>([weight, batch_axis, margin])</p></td>
<td><p>Given a target label of 1 or -1 and two input vectors input1 and input2, computes the cosine distance between the vectors.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss" title="mxnet.gluon.loss.SDMLLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SDMLLoss</span></code></a>([smoothing_parameter, weight, …])</p></td>
<td><p>Calculates the batchwise Smoothed Deep Metric Learning (SDML) loss given two input tensors and a smoothing weight. SDML loss learns similarity between paired samples by using unpaired samples in the minibatch as potential negative examples.</p></td>
</tr>
</tbody>
</table>
<dl class="class">
<dt id="mxnet.gluon.loss.Loss">
<em class="property">class </em><code class="sig-name descname">Loss</code><span class="sig-paren">(</span><em class="sig-param">weight</em>, <em class="sig-param">batch_axis</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#Loss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.Loss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p>
<p>Base class for loss.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
<li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li>
</ul>
</dd>
</dl>
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.apply" title="mxnet.gluon.loss.Loss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.collect_params" title="mxnet.gluon.loss.Loss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its children’s Parameters (by default); it can also return a selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of only the Parameters whose names match given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.export" title="mxnet.gluon.loss.Loss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.forward" title="mxnet.gluon.loss.Loss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(x, *args)</p></td>
<td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.hybridize" title="mxnet.gluon.loss.Loss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.infer_shape" title="mxnet.gluon.loss.Loss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.infer_type" title="mxnet.gluon.loss.Loss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.initialize" title="mxnet.gluon.loss.Loss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.load" title="mxnet.gluon.loss.Loss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.load_dict" title="mxnet.gluon.loss.Loss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from dict</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.load_parameters" title="mxnet.gluon.loss.Loss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.optimize_for" title="mxnet.gluon.loss.Loss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.register_forward_hook" title="mxnet.gluon.loss.Loss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.register_forward_pre_hook" title="mxnet.gluon.loss.Loss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.register_op_hook" title="mxnet.gluon.loss.Loss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Install op hook for block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.reset_ctx" title="mxnet.gluon.loss.Loss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.reset_device" title="mxnet.gluon.loss.Loss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.save" title="mxnet.gluon.loss.Loss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters to load again later</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.save_parameters" title="mxnet.gluon.loss.Loss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.setattr" title="mxnet.gluon.loss.Loss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.share_parameters" title="mxnet.gluon.loss.Loss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.summary" title="mxnet.gluon.loss.Loss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.zero_grad" title="mxnet.gluon.loss.Loss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffer to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.params" title="mxnet.gluon.loss.Loss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
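<p>A minimal sketch (the model and callback below are illustrative): <cite>apply</cite> walks the block tree, calling the function on each child first and then on the block itself.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Dense(4), nn.Dense(2))

def show(block):        # called once per sub-block, then on net itself
    print(type(block).__name__)

net.apply(show)         # prints: Dense, Dense, Sequential
</pre></div>
</div>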
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its
children’s Parameters (by default); it can also return a selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code>
of only the Parameters whose names match given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’, this can be done
using regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will be named <cite>data</cite>. When there
is more than one input, they will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4 digits epoch number.
If None, do not export to file but return Python Symbol object and
corresponding dictionary of parameters.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
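<p>A minimal sketch (model and prefix are illustrative): the block must be hybridized and run forward once so the cached graph exists before exporting.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()

net = nn.Dense(2)
net.initialize()
net.hybridize()
net(np.ones((1, 4)))    # one forward pass builds the cached graph
sym_file, params_file = net.export('model', epoch=0)
# writes 'model-symbol.json' and 'model-0000.params'
</pre></div>
</div>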
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#Loss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.Loss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li>
<li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li>
</ul>
</dd>
</dl>
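<p>Subclasses override <cite>forward</cite> to define the actual loss computation. A sketch of a custom L1-style loss, assuming the <cite>mxnet.np</cite> front-end and that <cite>pred</cite> and <cite>label</cite> already have matching shapes (weighting and error handling omitted):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np
from mxnet.gluon.loss import Loss

class MyL1Loss(Loss):                      # illustrative custom loss
    def __init__(self, weight=None, batch_axis=0, **kwargs):
        super().__init__(weight, batch_axis, **kwargs)

    def forward(self, pred, label):
        loss = np.abs(label - pred)
        # average over every axis except the batch axis
        axes = tuple(i for i in range(loss.ndim) if i != self._batch_axis)
        return np.mean(loss, axis=axes)
</pre></div>
</div>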
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic-shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
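<p>A minimal sketch (model and shapes illustrative): after <cite>hybridize</cite>, the first forward call traces and caches the graph; subsequent calls reuse it.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()

net = nn.Dense(2)
net.initialize()
net.hybridize(static_alloc=True, static_shape=True)
out = net(np.ones((8, 16)))   # first call compiles; later calls reuse the graph
</pre></div>
</div>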
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
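<p>A minimal sketch (the choice of initializer is illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(2)
net.initialize(init=mx.init.Xavier(), device=mx.cpu())
# calling initialize again is a no-op unless force_reinit=True
net.initialize(init=mx.init.Xavier(), force_reinit=True)
</pre></div>
</div>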
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture; it resets each Block’s
parameter UUIDs to the values they had when saved, so that they match the
names of the saved parameters.</p>
<p>This function assumes the Blocks in the model were created in the same
order as when the model was saved. Each Block is uniquely identified by
its class name and an in-order unique ID (the children form an
OrderedDict), and that ID is used to match the saved parameters.</p>
<p>If the model cannot be recreated deterministically in the same order
every time, do not use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if it had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
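<p>A minimal sketch of the save/load round trip (the prefix is illustrative); the second model must be constructed in the same order as the first:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()

net = nn.Dense(2)
net.initialize()
net(np.ones((1, 4)))    # materialize parameters
net.save('mymodel')     # writes mymodel-model.json and mymodel-model.params

net2 = nn.Dense(2)      # same architecture, created in the same order
net2.load('mymodel')
</pre></div>
</div>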
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from dict</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
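<p>A sketch of copying parameters between two identically structured blocks, assuming the dict maps parameter names to arrays:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()

src = nn.Dense(2)
src.initialize()
src(np.ones((1, 4)))    # materialize source parameters

dst = nn.Dense(2)
dst.load_dict({name: p.data() for name, p in src.collect_params().items()})
</pre></div>
</div>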
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be one of {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype used when casting
the parameters.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. Can be used in place of hybridize;
afterwards <cite>export</cite> can be called or inference can be run. See
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># partition and then export to file
block.optimize_for(x, backend='myPart')
block.export('partitioned')

# partition and then run inference
block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of the backend, as registered in <cite>SubgraphBackendRegistry</cite>, default None</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – Clears any previous optimizations.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.Loss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.Loss.forward" title="mxnet.gluon.loss.Loss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
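<p>A small sketch of a hook that logs output shapes without modifying anything; <cite>net</cite> and <cite>x</cite> are hypothetical:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>def log_output(block, inputs, output):
    # called after forward(); must not modify inputs or output
    print(type(block).__name__, getattr(output, 'shape', None))

handle = net.register_forward_hook(log_output)
net(x)           # hook fires here
handle.detach()  # remove the hook when done
</pre></div>
</div>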
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.Loss.forward" title="mxnet.gluon.loss.Loss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Installs an op hook on the block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters:
the name of the tensor being inspected (str),
the name of the operator producing or consuming that tensor (str),
and the tensor being inspected (NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
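<p>A minimal sketch of a callback matching the three-parameter form described above; it takes effect once the block has been hybridized (<cite>net</cite> and <cite>x</cite> are hypothetical):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>def monitor(tensor_name, op_name, tensor):
    # inspect intermediate tensors flowing through the cached graph
    print(tensor_name, op_name, tensor.shape)

net.register_op_hook(monitor, monitor_all=True)
net.hybridize()
net(x)
</pre></div>
</div>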
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameter to given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
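<p>A minimal sketch of moving all Parameters to a GPU, assuming one is available:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
net.reset_device(mx.gpu(0))  # net is a hypothetical initialized Block
</pre></div>
</div>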
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.save" title="Permalink to this definition"></a></dt>
<dd><p>Save the model architecture and parameters to load again later</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by Block class name and a unique ID.
We save each Block’s parameter UUID to restore later in order to match
the saved parameters.</p>
<p>Recursively traverses a Block’s children in order (since it’s an
OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph is saved (Symbol &amp; inputs) if
it has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save
model structures, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Set an attribute to a new value for all Parameters.</p>
<p>For example, set grad_req to null if you don’t need gradients w.r.t. a
model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<dl class="simple">
<dt>which equals to</dt><dd><p>dense1.weight = dense0.weight
dense1.bias = dense0.bias</p>
</dd>
</dl>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Print the summary of the model’s output and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
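<p>A minimal sketch, assuming an initialized, non-hybridized network <cite>net</cite> that takes a single image batch (the input shape here is an assumption):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
net.summary(mx.nd.ones((1, 3, 224, 224)))
</pre></div>
</div>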
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.Loss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.Loss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffer to 0.</p>
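<p>This is mainly useful when gradients are accumulated over several backward passes. A sketch under that assumption, with <cite>net</cite>, <cite>loss</cite>, <cite>batches</cite>, <cite>trainer</cite> and <cite>batch_size</cite> all hypothetical:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.setattr('grad_req', 'add')  # accumulate gradients instead of overwriting
for x, y in batches:
    with mx.autograd.record():  # assumes: import mxnet as mx
        l = loss(net(x), y)
    l.backward()
trainer.step(batch_size)
net.zero_grad()                 # clear the accumulated gradients
</pre></div>
</div>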
</dd></dl>
</dd></dl>
<dl class="class">
<dt id="mxnet.gluon.loss.L2Loss">
<em class="property">class </em><code class="sig-name descname">L2Loss</code><span class="sig-paren">(</span><em class="sig-param">weight=1.0</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#L2Loss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.L2Loss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p>
<p>Calculates the mean squared error between <cite>label</cite> and <cite>pred</cite>.</p>
<div class="math notranslate nohighlight">
\[L = \frac{1}{2} \sum_i \vert {label}_i - {pred}_i \vert^2.\]</div>
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.apply" title="mxnet.gluon.loss.L2Loss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.collect_params" title="mxnet.gluon.loss.L2Loss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its children’s Parameters by default; it can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of the Parameters whose names match given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.export" title="mxnet.gluon.loss.L2Loss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.forward" title="mxnet.gluon.loss.L2Loss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(pred, label[, sample_weight])</p></td>
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.hybridize" title="mxnet.gluon.loss.L2Loss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.infer_shape" title="mxnet.gluon.loss.L2Loss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.infer_type" title="mxnet.gluon.loss.L2Loss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.initialize" title="mxnet.gluon.loss.L2Loss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.load" title="mxnet.gluon.loss.L2Loss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.load_dict" title="mxnet.gluon.loss.L2Loss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from a dict.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.load_parameters" title="mxnet.gluon.loss.L2Loss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.optimize_for" title="mxnet.gluon.loss.L2Loss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.register_forward_hook" title="mxnet.gluon.loss.L2Loss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.register_forward_pre_hook" title="mxnet.gluon.loss.L2Loss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.register_op_hook" title="mxnet.gluon.loss.L2Loss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Installs an op hook on the block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.reset_ctx" title="mxnet.gluon.loss.L2Loss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.reset_device" title="mxnet.gluon.loss.L2Loss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.save" title="mxnet.gluon.loss.L2Loss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters to load again later</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.save_parameters" title="mxnet.gluon.loss.L2Loss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.setattr" title="mxnet.gluon.loss.L2Loss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.share_parameters" title="mxnet.gluon.loss.L2Loss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.summary" title="mxnet.gluon.loss.L2Loss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.zero_grad" title="mxnet.gluon.loss.L2Loss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffer to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.params" title="mxnet.gluon.loss.L2Loss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<p><cite>label</cite> and <cite>pred</cite> can have arbitrary shape as long as they have the same
number of elements.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
<li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li>
</ul>
</dd>
</dl>
<dl class="simple">
<dt>Inputs:</dt><dd><ul class="simple">
<li><p><strong>pred</strong>: prediction tensor with arbitrary shape</p></li>
<li><p><strong>label</strong>: target tensor with the same size as pred.</p></li>
<li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).</p></li>
</ul>
</dd>
<dt>Outputs:</dt><dd><ul class="simple">
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.</p></li>
</ul>
</dd>
</dl>
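<p>A minimal usage sketch; with these hand-picked values the per-sample losses work out to [0.125, 0.25]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np
from mxnet.gluon.loss import L2Loss

loss_fn = L2Loss()
pred  = np.array([[0.5, 1.5], [2.0, 0.0]])
label = np.array([[1.0, 1.0], [2.0, 1.0]])
print(loss_fn(pred, label))  # one loss value per sample along batch_axis
</pre></div>
</div>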
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its
children’s Parameters by default; it can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code>
of the Parameters whose names match given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’; this can be done
using regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will be named <cite>data</cite>. When there
is more than one input, the inputs will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4-digit epoch number.
If None, do not export to file but return the Python Symbol object and the
corresponding dictionary of parameters.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
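<p>A minimal sketch; the block must have been hybridized and run once so the graph is cached before exporting (<cite>net</cite> and <cite>x</cite> are hypothetical):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.hybridize()
net(x)  # forward pass caches the graph
sym_file, params_file = net.export('my_model', epoch=0)
# creates 'my_model-symbol.json' and 'my_model-0000.params'
</pre></div>
</div>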
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#L2Loss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>pred</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – Prediction tensor with arbitrary shape.</p></li>
<li><p><strong>label</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – Target tensor with the same number of elements as <cite>pred</cite>.</p></li>
<li><p><strong>sample_weight</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a><em>, </em><em>optional</em>) – Element-wise weighting tensor, broadcastable to the shape of <cite>pred</cite>.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
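<p>A minimal sketch enabling hybrid execution with static memory allocation; <cite>net</cite> and <cite>x</cite> are hypothetical:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.hybridize(static_alloc=True, static_shape=True)
net(x)  # the first call builds and caches the optimized graph
</pre></div>
</div>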
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
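<p>A minimal sketch, assuming a freshly constructed block <cite>net</cite>:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
net.initialize(init=mx.init.Xavier(), device=mx.cpu())
</pre></div>
</div>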
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture. It resets each Block’s
parameter UUIDs as they were when saved in order to match the names of the
saved parameters.</p>
<p>This function assumes the Blocks in the model were created in the same
order they were when the model was saved. This is because each Block is
uniquely identified by its class name and a unique ID assigned in creation
order (since it’s an OrderedDict), and the unique ID is used to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if it had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from a dict.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be one of {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype used when casting
the parameters.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be one of {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype used when casting
the parameters.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. Can be used in place of hybridize;
afterwards <cite>export</cite> can be called or inference can be run. See
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># partition and then export to file
block.optimize_for(x, backend='myPart')
block.export('partitioned')

# partition and then run inference
block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of the backend, as registered in <cite>SubgraphBackendRegistry</cite>, default None</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – Clears any previous optimizations.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.L2Loss.forward" title="mxnet.gluon.loss.L2Loss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.L2Loss.forward" title="mxnet.gluon.loss.L2Loss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Installs an op hook on the block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters:
the name of the tensor being inspected (str),
the name of the operator producing or consuming that tensor (str),
and the tensor being inspected (NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameter to given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.save" title="Permalink to this definition"></a></dt>
<dd><p>Save the model architecture and parameters to load again later</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by Block class name and a unique ID.
We save each Block’s parameter UUID to restore later in order to match
the saved parameters.</p>
<p>Recursively traverses a Block’s children in order (since it’s an
OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph is saved (Symbol &amp; inputs) if
it has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
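<p>For example, a minimal save/load round trip (assumes MXNet 2.x; the prefix <cite>mymodel</cite> is illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net = mx.gluon.nn.Dense(4)
net.initialize()
net(mx.np.zeros((1, 8)))     # run once so parameter shapes are known
net.save('mymodel')          # writes mymodel-model.json and mymodel-model.params

net2 = mx.gluon.nn.Dense(4)  # must be created identically to net
net2.load('mymodel')
</pre></div>
</div>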
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save the
model structure, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
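<p>For example, a minimal sketch pairing <cite>save_parameters</cite> with <cite>load_parameters</cite> (assumes MXNet 2.x; the filename is illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net = mx.gluon.nn.Dense(4)
net.initialize()
net(mx.np.zeros((1, 8)))             # materialize parameter shapes
net.save_parameters('dense.params')  # parameters only, no architecture

net2 = mx.gluon.nn.Dense(4)
net2.load_parameters('dense.params')
</pre></div>
</div>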
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Set an attribute to a new value for all Parameters.</p>
<p>For example, set grad_req to null if you don’t need gradients w.r.t. a
model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<dl class="simple">
<dt>which is equivalent to</dt><dd><p>dense1.weight = dense0.weight
dense1.bias = dense0.bias</p>
</dd>
</dl>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Print the summary of the model’s output and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
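<p>For example, a minimal sketch (assumes MXNet 2.x; the block must be initialized and not hybridized):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net = mx.gluon.nn.Dense(4)
net.initialize()
net.summary(mx.np.zeros((1, 8)))  # prints per-layer output shapes and parameter counts
</pre></div>
</div>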
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L2Loss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffer to 0.</p>
</dd></dl>
</dd></dl>
<dl class="class">
<dt id="mxnet.gluon.loss.L1Loss">
<em class="property">class </em><code class="sig-name descname">L1Loss</code><span class="sig-paren">(</span><em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#L1Loss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.L1Loss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p>
<p>Calculates the mean absolute error between <cite>label</cite> and <cite>pred</cite>.</p>
<div class="math notranslate nohighlight">
\[L = \sum_i \vert {label}_i - {pred}_i \vert.\]</div>
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.apply" title="mxnet.gluon.loss.L1Loss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.collect_params" title="mxnet.gluon.loss.L1Loss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its children’s Parameters (default); can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of parameters that match given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.export" title="mxnet.gluon.loss.L1Loss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.forward" title="mxnet.gluon.loss.L1Loss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(pred, label[, sample_weight])</p></td>
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.hybridize" title="mxnet.gluon.loss.L1Loss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.infer_shape" title="mxnet.gluon.loss.L1Loss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.infer_type" title="mxnet.gluon.loss.L1Loss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.initialize" title="mxnet.gluon.loss.L1Loss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.load" title="mxnet.gluon.loss.L1Loss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.load_dict" title="mxnet.gluon.loss.L1Loss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from dict</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.load_parameters" title="mxnet.gluon.loss.L1Loss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.optimize_for" title="mxnet.gluon.loss.L1Loss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.register_forward_hook" title="mxnet.gluon.loss.L1Loss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.register_forward_pre_hook" title="mxnet.gluon.loss.L1Loss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.register_op_hook" title="mxnet.gluon.loss.L1Loss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Install op hook for block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.reset_ctx" title="mxnet.gluon.loss.L1Loss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.reset_device" title="mxnet.gluon.loss.L1Loss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.save" title="mxnet.gluon.loss.L1Loss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters to load again later</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.save_parameters" title="mxnet.gluon.loss.L1Loss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.setattr" title="mxnet.gluon.loss.L1Loss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.share_parameters" title="mxnet.gluon.loss.L1Loss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.summary" title="mxnet.gluon.loss.L1Loss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.zero_grad" title="mxnet.gluon.loss.L1Loss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffer to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.params" title="mxnet.gluon.loss.L1Loss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<p><cite>label</cite> and <cite>pred</cite> can have arbitrary shape as long as they have the same
number of elements.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
<li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li>
</ul>
</dd>
</dl>
<dl class="simple">
<dt>Inputs:</dt><dd><ul class="simple">
<li><p><strong>pred</strong>: prediction tensor with arbitrary shape</p></li>
<li><p><strong>label</strong>: target tensor with the same size as pred.</p></li>
<li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).</p></li>
</ul>
</dd>
<dt>Outputs:</dt><dd><ul class="simple">
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.</p></li>
</ul>
</dd>
</dl>
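<p>For example, a minimal sketch computing the per-sample loss (assumes MXNet 2.x; the values are illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

loss_fn = mx.gluon.loss.L1Loss()
pred = mx.np.array([[0.5, 1.5], [2.0, 0.0]])
label = mx.np.array([[1.0, 1.0], [2.0, 1.0]])
loss = loss_fn(pred, label)  # shape (2,): mean absolute error per sample
</pre></div>
</div>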
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its
children’s Parameters (default); can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code>
of parameters that match given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’, this can be done
using regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will be named <cite>data</cite>. When there
is more than one input, they will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4-digit epoch number.
If None, do not export to file but return the Python Symbol object and
the corresponding dictionary of parameters.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
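<p>For example, a minimal sketch (assumes MXNet 2.x; the prefix <cite>model</cite> is illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net = mx.gluon.nn.Dense(4)
net.initialize()
net.hybridize()
net(mx.np.zeros((1, 8)))      # a forward pass caches the graph
net.export('model', epoch=0)  # writes model-symbol.json and model-0000.params
</pre></div>
</div>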
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#L1Loss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>pred</strong> (<a class="reference internal" href="../../legacy/symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – Prediction tensor.</p></li>
<li><p><strong>label</strong> (<em>Symbol</em><em> or </em><em>NDArray</em>) – Target tensor with the same shape as <cite>pred</cite>.</p></li>
<li><p><strong>sample_weight</strong> (<em>Symbol</em><em> or </em><em>NDArray</em><em>, </em><em>optional</em>) – Element-wise weighting tensor, broadcastable to the shape of <cite>pred</cite>.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic-shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
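<p>For example, a minimal sketch (assumes MXNet 2.x):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net = mx.gluon.nn.Dense(4)
net.initialize()
net.hybridize(static_alloc=True, static_shape=True)
out = net(mx.np.zeros((1, 8)))  # the first call builds and caches the optimized graph
</pre></div>
</div>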
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
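<p>For example, a minimal sketch using a global Xavier initializer (assumes MXNet 2.x):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net = mx.gluon.nn.Dense(4)
net.initialize(init=mx.init.Xavier(), device=mx.cpu())
</pre></div>
</div>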
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture. It resets each Block’s
parameter UUIDs to their values at save time in order to match the names
of the saved parameters.</p>
<p>This function assumes the Blocks in the model were created in the same
order they were when the model was saved. Each Block is uniquely
identified by its class name and a unique ID assigned in traversal order
(since children are stored in an OrderedDict), and that unique ID is
used to denote the specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if it had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from dict</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
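<p>For example, a minimal sketch that copies parameters between two identically structured blocks through an in-memory dict (assumes MXNet 2.x; this particular workflow is illustrative, not prescribed):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

src = mx.gluon.nn.Dense(4)
src.initialize()
src(mx.np.zeros((1, 8)))  # materialize parameter shapes

dst = mx.gluon.nn.Dense(4)
param_dict = {name: p.data() for name, p in src.collect_params().items()}
dst.load_dict(param_dict, device=mx.cpu())
</pre></div>
</div>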
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. Can be used in place of hybridize;
afterwards, <cite>export</cite> can be called or inference can be run. See
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># partition and then export to file
block.optimize_for(x, backend='myPart')
block.export('partitioned')

# partition and then run inference
block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of the backend, as registered in <cite>SubgraphBackendRegistry</cite>; default None.</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – Clears any previous optimizations.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic-shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.L1Loss.forward" title="mxnet.gluon.loss.L1Loss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.L1Loss.forward" title="mxnet.gluon.loss.L1Loss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Install op hook for block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters: the name of the
tensor being inspected (str), the name of the operator producing or
consuming that tensor (str), and the tensor being inspected (NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameter to given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.save" title="Permalink to this definition"></a></dt>
<dd><p>Save the model architecture and parameters to load again later</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by Block class name and a unique ID.
We save each Block’s parameter UUID to restore later in order to match
the saved parameters.</p>
<p>Recursively traverses a Block’s children in order (since they are stored
in an OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph is saved (Symbol &amp; inputs) if
it has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save the
model structure, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Set an attribute to a new value for all Parameters.</p>
<p>For example, set grad_req to null if you don’t need gradients w.r.t. a
model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<dl class="simple">
<dt>which is equivalent to</dt><dd><p>dense1.weight = dense0.weight
dense1.bias = dense0.bias</p>
</dd>
</dl>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Print the summary of the model’s output and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.L1Loss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffer to 0.</p>
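<p>For example, a minimal sketch where gradients accumulate and are cleared between iterations (assumes MXNet 2.x):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet import autograd

net = mx.gluon.nn.Dense(4)
net.initialize()
net.setattr('grad_req', 'add')  # accumulate gradients across backward calls
with autograd.record():
    y = net(mx.np.ones((1, 8))).sum()
y.backward()
net.zero_grad()                 # reset the accumulated gradient buffers to 0
</pre></div>
</div>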
</dd></dl>
</dd></dl>
<dl class="class">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss">
<em class="property">class </em><code class="sig-name descname">SigmoidBinaryCrossEntropyLoss</code><span class="sig-paren">(</span><em class="sig-param">from_sigmoid=False</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#SigmoidBinaryCrossEntropyLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p>
<p>The cross-entropy loss for binary classification. (alias: SigmoidBCELoss)</p>
<p>BCE loss is useful when training logistic regression. If <cite>from_sigmoid</cite>
is False (default), this loss computes:</p>
<div class="math notranslate nohighlight">
\[ \begin{align}\begin{aligned}prob = \frac{1}{1 + \exp(-{pred})}\\L = - \sum_i {label}_i * \log({prob}_i) * pos\_weight +
(1 - {label}_i) * \log(1 - {prob}_i)\end{aligned}\end{align} \]</div>
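<p>For example, a minimal sketch with raw logits (the default <cite>from_sigmoid=False</cite>; assumes MXNet 2.x, values illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

loss_fn = mx.gluon.loss.SigmoidBinaryCrossEntropyLoss()
pred = mx.np.array([[0.3, -1.2]])  # raw scores; the sigmoid is applied internally
label = mx.np.array([[1.0, 0.0]])
loss = loss_fn(pred, label)        # shape (1,)
</pre></div>
</div>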
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.apply" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.collect_params" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its children’s Parameters (default); can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of parameters that match given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.export" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.forward" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(pred, label[, sample_weight, pos_weight])</p></td>
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.hybridize" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.infer_shape" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.infer_type" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.initialize" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.load" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.load_dict" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from dict</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.load_parameters" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.optimize_for" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.register_forward_hook" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.register_forward_pre_hook" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.register_op_hook" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Install op hook for block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.reset_ctx" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.reset_device" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.save" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters to load again later</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.save_parameters" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.setattr" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.share_parameters" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.summary" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.zero_grad" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffer to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.params" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<p>If <cite>from_sigmoid</cite> is True, this loss computes:</p>
<div class="math notranslate nohighlight">
\[L = - \sum_i {label}_i * \log({pred}_i) * pos\_weight +
(1 - {label}_i) * \log(1 - {pred}_i)\]</div>
<p>A tensor <cite>pos_weight &gt; 1</cite> decreases the false negative count, hence increasing
the recall.
Conversely, setting <cite>pos_weight &lt; 1</cite> decreases the false positive count and
increases the precision.</p>
<p><cite>pred</cite> and <cite>label</cite> can have arbitrary shape as long as they have the same
number of elements.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>from_sigmoid</strong> (bool, default is <cite>False</cite>) – Whether the input is already the output of a sigmoid. Setting this to <cite>False</cite> makes
the loss compute the sigmoid and the BCE together, which is more numerically
stable via the log-sum-exp trick.</p></li>
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
<li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li>
</ul>
</dd>
</dl>
<dl class="simple">
<dt>Inputs:</dt><dd><ul class="simple">
<li><p><strong>pred</strong>: prediction tensor with arbitrary shape</p></li>
<li><p><strong>label</strong>: target tensor with values in range <cite>[0, 1]</cite>. Must have the
same size as <cite>pred</cite>.</p></li>
<li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).</p></li>
<li><p><strong>pos_weight</strong>: a weighting tensor of positive examples. Must be a vector with length
equal to the number of classes. For example, if pred has shape (64, 10),
pos_weight should have shape (1, 10).</p></li>
</ul>
</dd>
<dt>Outputs:</dt><dd><ul class="simple">
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.</p></li>
</ul>
</dd>
</dl>
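<p>For illustration, a minimal usage sketch of this loss (the array values are made up; the
NumPy-style <cite>mx.np</cite> interface of MXNet 2.x is assumed):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon.loss import SigmoidBinaryCrossEntropyLoss

loss_fn = SigmoidBinaryCrossEntropyLoss()       # from_sigmoid=False: pred holds raw logits
pred = mx.np.array([[0.5, -1.2], [2.0, 0.3]])   # raw scores, shape (2, 2)
label = mx.np.array([[1.0, 0.0], [1.0, 1.0]])   # binary targets, same shape as pred
loss = loss_fn(pred, label)                     # shape (2,): one loss value per sample

# up-weight positive examples per class, e.g. to trade precision for recall
pos_weight = mx.np.array([[2.0, 1.0]])          # shape (1, num_classes)
loss = loss_fn(pred, label, None, pos_weight)   # sample_weight left as None
</pre></div>
</div>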
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
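<p>A small sketch of <cite>apply</cite>, here just printing each sub-block’s class name (any
callable taking a block works):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet.gluon.loss import SigmoidBinaryCrossEntropyLoss

loss_fn = SigmoidBinaryCrossEntropyLoss()
# fn is invoked once for every child block and once for loss_fn itself
loss_fn.apply(lambda block: print(type(block).__name__))
</pre></div>
</div>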
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its
children’s Parameters (by default); it can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of only
those Parameters whose names match given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’, this can be done
using regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will be named <cite>data</cite>. When there
are multiple inputs, they will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4-digit epoch number.
If None, the model is not exported to file; instead the Python Symbol object and the
corresponding dictionary of parameters are returned.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
</dd></dl>
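<p>A sketch of a typical <cite>export</cite> call. A loss block is rarely exported on its own, so a
small <cite>Dense</cite> layer is used here; the file names follow the pattern described above:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(2)
net.initialize()
net.hybridize()
net(mx.np.ones((1, 4)))              # one forward pass builds the cached graph
net.export('my-model', epoch=0)      # writes my-model-symbol.json and my-model-0000.params
</pre></div>
</div>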
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em>, <em class="sig-param">pos_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#SigmoidBinaryCrossEntropyLoss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li>
<li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape operator exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
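<p>For example, to trade flexibility for speed (a sketch; the shapes are arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon.loss import SigmoidBinaryCrossEntropyLoss

loss_fn = SigmoidBinaryCrossEntropyLoss()
loss_fn.hybridize(static_alloc=True, static_shape=True)
pred = mx.np.zeros((4, 3))
label = mx.np.ones((4, 3))
loss = loss_fn(pred, label)          # the first call builds and caches the graph
</pre></div>
</div>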
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
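<p>A loss block has no Parameters of its own, so <cite>initialize</cite> is usually called on a
model instead; a minimal sketch:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(10)
net.initialize(init=mx.init.Xavier(), device=mx.cpu())   # shapes are inferred lazily
net(mx.np.ones((2, 5)))                                  # first call materializes Parameters
</pre></div>
</div>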
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture. It resets each Block’s
parameter UUIDs to their values at save time, in order to match the names of the
saved parameters.</p>
<p>This function assumes the Blocks in the model were created in the same
order as when the model was saved. This is because each Block is
uniquely identified by its class name and a unique ID in order (since
it’s an OrderedDict), and that unique ID is used to denote the specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if it had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
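<p>A save/load round trip under the determinism assumption above might look like this
(a sketch; the prefix is arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(10)
net.initialize()
net(mx.np.ones((1, 4)))      # materialize Parameters before saving
net.save('checkpoint')       # writes checkpoint-model.json and checkpoint-model.params

net2 = nn.Dense(10)          # must be constructed the same way as the saved model
net2.load('checkpoint')
</pre></div>
</div>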
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from dict</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
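<p>A typical round trip with <cite>save_parameters</cite> (a sketch; the file name is arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(10)
net.initialize()
net(mx.np.ones((1, 4)))                  # materialize Parameters
net.save_parameters('dense.params')

net2 = nn.Dense(10)                      # same architecture, built independently
net2.load_parameters('dense.params', device=mx.cpu())
</pre></div>
</div>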
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. Can be used in place of hybridize;
afterwards <cite>export</cite> can be called or inference can be run. See
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># partition and then export to file
block.optimize_for(x, backend='myPart')
block.export('partitioned')

# partition and then run inference
block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of the backend, as registered in <cite>SubgraphBackendRegistry</cite>. Default None.</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – Clears any previous optimizations.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape operator exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.forward" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
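<p>For example, a hook that logs output shapes and is detached once it is no longer
needed (a sketch):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon.loss import SigmoidBinaryCrossEntropyLoss

def log_output(block, inputs, output):
    # called after every forward(); must not modify inputs or output
    print(type(block).__name__, 'output shape:', output.shape)

loss_fn = SigmoidBinaryCrossEntropyLoss()
handle = loss_fn.register_forward_hook(log_output)
loss_fn(mx.np.zeros((4, 3)), mx.np.ones((4, 3)))   # hook fires here
handle.detach()                                    # remove the hook
</pre></div>
</div>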
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.forward" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Install op hook for block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters:
the name of the tensor being inspected (str),
the name of the operator producing or consuming that tensor (str),
and the tensor being inspected (NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
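<p>A rough sketch of installing an op hook; the callback signature follows the
description above:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

def monitor(name, op_name, array):
    # tensor name (str), operator name (str), tensor being inspected (NDArray)
    print(name, op_name, array.shape)

net = nn.Dense(2)
net.initialize()
net.register_op_hook(monitor, monitor_all=True)   # monitor inputs and outputs
net.hybridize()                                   # op hooks act on hybridized blocks
net(mx.np.ones((1, 4)))
</pre></div>
</div>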
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameters to the given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.save" title="Permalink to this definition"></a></dt>
<dd><p>Save the model architecture and parameters to load again later</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by Block class name and a unique ID.
We save each Block’s parameter UUID to restore later in order to match
the saved parameters.</p>
<p>Recursively traverses a Block’s children in order (since it’s an
OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph is saved (Symbol &amp; inputs) if
it has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save
model structures, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Set an attribute to a new value for all Parameters.</p>
<p>For example, set grad_req to null if you don’t need gradients w.r.t. a
model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<p>which is equivalent to:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>dense1.weight = dense0.weight
dense1.bias = dense0.bias
</pre></div>
</div>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Print the summary of the model’s output and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
</dd></dl>
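<p>For example (a sketch; the input shape is arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(2)
net.initialize()                    # initialized, but not hybridized
net.summary(mx.np.ones((8, 16)))    # prints per-layer output shapes and parameter counts
</pre></div>
</div>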
<dl class="method">
<dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffer to 0.</p>
</dd></dl>
</dd></dl>
<dl class="attribute">
<dt id="mxnet.gluon.loss.SigmoidBCELoss">
<code class="sig-name descname">SigmoidBCELoss</code><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBCELoss" title="Permalink to this definition"></a></dt>
<dd><p>alias of <a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss</span></code></a></p>
</dd></dl>
<dl class="class">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss">
<em class="property">class </em><code class="sig-name descname">SoftmaxCrossEntropyLoss</code><span class="sig-paren">(</span><em class="sig-param">axis=-1</em>, <em class="sig-param">sparse_label=True</em>, <em class="sig-param">from_logits=False</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#SoftmaxCrossEntropyLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p>
<p>Computes the softmax cross entropy loss. (alias: SoftmaxCELoss)</p>
<p>If <cite>sparse_label</cite> is <cite>True</cite> (default), label should contain integer
category indicators:</p>
<div class="math notranslate nohighlight">
\[ \begin{align}\begin{aligned}\DeclareMathOperator{softmax}{softmax}\\p = \softmax({pred})\\L = -\sum_i \log p_{i,{label}_i}\end{aligned}\end{align} \]</div>
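<p>For illustration, a minimal sketch with sparse integer labels (the values are made up;
the NumPy-style <cite>mx.np</cite> interface of MXNet 2.x is assumed):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon.loss import SoftmaxCrossEntropyLoss

loss_fn = SoftmaxCrossEntropyLoss()                        # sparse_label=True by default
pred = mx.np.array([[2.0, 0.5, 0.3], [0.1, 1.5, 0.2]])     # raw scores, shape (2, 3)
label = mx.np.array([0, 1])                                # one class index per sample
loss = loss_fn(pred, label)                                # shape (2,)
</pre></div>
</div>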
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.apply" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.collect_params" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its children’s Parameters (by default); it can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of only those Parameters whose names match given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.export" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.forward" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(pred, label[, sample_weight])</p></td>
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.hybridize" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.infer_shape" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.infer_type" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.initialize" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.load" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.load_dict" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from a dict.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.load_parameters" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.optimize_for" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.register_forward_hook" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.register_forward_pre_hook" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.register_op_hook" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Installs an op hook on the block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.reset_ctx" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.reset_device" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.save" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters so they can be loaded again later.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.save_parameters" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.setattr" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.share_parameters" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.summary" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.zero_grad" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffer to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.params" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<p><cite>label</cite>’s shape should be <cite>pred</cite>’s shape with the <cite>axis</cite> dimension removed,
e.g. for <cite>pred</cite> with shape (1,2,3,4) and <cite>axis = 2</cite>, <cite>label</cite>’s shape should
be (1,2,4).</p>
<p>If <cite>sparse_label</cite> is <cite>False</cite>, <cite>label</cite> should contain a probability distribution,
and <cite>label</cite>’s shape should be the same as <cite>pred</cite>’s:</p>
<div class="math notranslate nohighlight">
\[ \begin{align}\begin{aligned}p = \softmax({pred})\\L = -\sum_i \sum_j {label}_{ij} \log p_{ij}\end{aligned}\end{align} \]</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>axis</strong> (<em>int</em><em>, </em><em>default -1</em>) – The axis to sum over when computing softmax and entropy.</p></li>
<li><p><strong>sparse_label</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether label is an integer array instead of a probability distribution.</p></li>
<li><p><strong>from_logits</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether input is a log probability (usually from log_softmax) instead
of unnormalized numbers.</p></li>
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
<li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li>
</ul>
</dd>
</dl>
<dl class="simple">
<dt>Inputs:</dt><dd><ul class="simple">
<li><p><strong>pred</strong>: the prediction tensor, where the <cite>batch_axis</cite> dimension
ranges over batch size and <cite>axis</cite> dimension ranges over the number
of classes.</p></li>
<li><p><strong>label</strong>: the truth tensor. When <cite>sparse_label</cite> is True, <cite>label</cite>’s
shape should be <cite>pred</cite>’s shape with the <cite>axis</cite> dimension removed,
e.g. for <cite>pred</cite> with shape (1,2,3,4) and <cite>axis = 2</cite>, <cite>label</cite>’s shape
should be (1,2,4) and values should be integers between 0 and 2. If
<cite>sparse_label</cite> is False, <cite>label</cite>’s shape must be the same as <cite>pred</cite>’s
and values should be floats in the range <cite>[0, 1]</cite>.</p></li>
<li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).</p></li>
</ul>
</dd>
<dt>Outputs:</dt><dd><ul class="simple">
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.</p></li>
</ul>
</dd>
</dl>
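<p>A minimal usage sketch (shapes and values here are illustrative, not part of the formal definition):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import loss as gloss

mx.npx.set_np()                              # NumPy-style arrays (MXNet 2.x)
loss_fn = gloss.SoftmaxCrossEntropyLoss()    # sparse_label=True by default
pred = mx.np.random.uniform(size=(4, 10))    # batch of 4 samples, 10 classes
label = mx.np.array([0, 3, 9, 1])            # integer class indices
l = loss_fn(pred, label)                     # shape (4,): one loss per sample
</pre></div>
</div>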
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
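<p>For example, a small sketch that visits each block (the function name is illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet.gluon import loss as gloss

def print_name(block):
    # called on every child block and on the block itself
    print(type(block).__name__)

loss_fn = gloss.SoftmaxCrossEntropyLoss()
loss_fn.apply(print_name)                    # prints 'SoftmaxCrossEntropyLoss'
</pre></div>
</div>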
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its
children’s Parameters (by default). It can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code>
of only those parameters whose names match given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’, this can be done
using regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will be named <cite>data</cite>. When there
are multiple inputs, they will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4-digit epoch number.
If None, do not export to file but return Python Symbol object and
corresponding dictionary of parameters.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
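<p>A sketch of a typical export flow; <cite>net</cite> is assumed to be a <cite>HybridBlock</cite> and the input shape is hypothetical:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net.hybridize()                              # cache the symbolic graph
net(mx.np.ones((1, 3, 224, 224)))            # one forward pass builds the graph
sym_file, params_file = net.export('mynet', epoch=0)
# writes mynet-symbol.json and mynet-0000.params
</pre></div>
</div>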
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#SoftmaxCrossEntropyLoss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>; computes the loss from <cite>pred</cite> and <cite>label</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>pred</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The prediction tensor.</p></li>
<li><p><strong>label</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The truth tensor; see <cite>Inputs</cite> above for the expected shape.</p></li>
<li><p><strong>sample_weight</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a><em> or </em><em>None</em>) – Element-wise weighting tensor, broadcastable to the shape of <cite>pred</cite>.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic-shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
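<p>For instance (a sketch; <cite>pred</cite> and <cite>label</cite> as in the class-level example above):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet.gluon import loss as gloss

loss_fn = gloss.SoftmaxCrossEntropyLoss()
loss_fn.hybridize(static_alloc=True)         # compile into a cached graph
l = loss_fn(pred, label)                     # first call triggers graph construction
</pre></div>
</div>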
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
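<p>A short sketch (the layer and initializer choices are illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(10)
net.initialize(init=mx.init.Xavier(), device=mx.cpu())
</pre></div>
</div>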
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API.</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture. It resets each Block’s
parameter UUIDs as they were when saved in order to match the names of the
saved parameters.</p>
<p>This function assumes the Blocks in the model were created in the same
order they were when the model was saved. This is because each Block is
uniquely identified by its class name and a unique ID assigned in creation
order (since the children form an OrderedDict), and that unique ID is used
to denote the specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not use this set
of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if it had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
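<p>A sketch of the save/load round trip; <cite>make_net</cite> is a hypothetical constructor that must build the model in the same order every time:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net = make_net()                 # hypothetical constructor
net.save('checkpoint')           # writes checkpoint-model.json / checkpoint-model.params
new_net = make_net()             # identical architecture, same creation order
new_net.load('checkpoint')       # restores parameters (and cached graph if hybridized)
</pre></div>
</div>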
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from a dict.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter, if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be one of {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
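<p>For instance (a sketch; the file name is hypothetical and <cite>net</cite> is assumed to exist):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

param_dict = mx.npx.load('net.params')       # dict mapping parameter names to arrays
net.load_dict(param_dict, device=mx.cpu())
</pre></div>
</div>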
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be one of {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
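<p>Typical usage looks like this (a sketch; <cite>net</cite> and <cite>net2</cite> are assumed to be identically constructed networks):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net.save_parameters('net.params')
# ... later, with an identically constructed network:
net2.load_parameters('net.params', device=mx.cpu())
</pre></div>
</div>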
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. Can be used in place of hybridize,
afterwards <cite>export</cite> can be called or inference can be run. See README.md in
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># partition and then export to file
block.optimize_for(x, backend='myPart')
block.export('partitioned')

# partition and then run inference
block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of backend, as registered in <cite>SubgraphBackendRegistry</cite>, default None</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – clears any previous optimizations</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic-shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.forward" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
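<p>An illustrative sketch of a hook that logs output shapes (<cite>loss_fn</cite>, <cite>pred</cite> and <cite>label</cite> as in the class-level example):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>def log_output(block, inputs, output):
    # runs after every forward(); must not modify inputs or output
    print(type(block).__name__, getattr(output, 'shape', None))

handle = loss_fn.register_forward_hook(log_output)
l = loss_fn(pred, label)         # hook fires here
handle.detach()                  # remove the hook when done
</pre></div>
</div>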
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.forward" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
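<p>For example (a sketch), a pre-hook can validate inputs before <code class="docutils literal notranslate"><span class="pre">forward()</span></code> runs:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>def check_inputs(block, inputs):
    # runs before forward(); must not modify the inputs
    assert len(inputs) >= 2, 'expected pred and label'

handle = loss_fn.register_forward_pre_hook(check_inputs)
</pre></div>
</div>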
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Installs an op hook on the block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters: the name of the tensor
being inspected (str), the name of the operator producing or consuming that
tensor (str), and the tensor being inspected (NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
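<p>A sketch of a monitoring callback (assumes <cite>net</cite> is a <cite>HybridBlock</cite>):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>def monitor(tensor_name, op_name, tensor):
    # inspect intermediate values after hybridization
    print(tensor_name, op_name, tensor.shape)

net.hybridize()
net.register_op_hook(monitor, monitor_all=True)   # monitor inputs and outputs
</pre></div>
</div>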
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameter to given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
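<p>For instance (assuming a GPU-enabled build):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net.reset_device(mx.gpu(0))                  # move all Parameters to GPU 0
net.reset_device([mx.gpu(0), mx.gpu(1)])     # or keep a copy on each device
</pre></div>
</div>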
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.save" title="Permalink to this definition"></a></dt>
<dd><p>Save the model architecture and parameters so they can be loaded again later.</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by Block class name and a unique ID.
We save each Block’s parameter UUID to restore later in order to match
the saved parameters.</p>
<p>Recursively traverses a Block’s children in order (since they form an
OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not use this set
of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph is saved (Symbol &amp; inputs) if
it has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save
model structures, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Set an attribute to a new value for all Parameters.</p>
<p>For example, set grad_req to null if you don’t need gradients w.r.t. a
model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<dl class="simple">
<dt>which equals to</dt><dd><p>dense1.weight = dense0.weight
dense1.bias = dense0.bias</p>
</dd>
</dl>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Print the summary of the model’s output and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
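<p>For example (a sketch; the input shape is hypothetical):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net.initialize()                             # summary requires initialized parameters
net.summary(mx.np.ones((1, 3, 224, 224)))    # prints per-layer outputs and parameters
</pre></div>
</div>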
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffer to 0.</p>
</dd></dl>
</dd></dl>
<dl class="attribute">
<dt id="mxnet.gluon.loss.SoftmaxCELoss">
<code class="sig-name descname">SoftmaxCELoss</code><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCELoss" title="Permalink to this definition"></a></dt>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its children’s Parameters (by default); it can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of only those parameters whose names match given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code>(pred, label[, sample_weight])</p></td>
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API.</p></td>
</tr>
<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code>(param_dict[, device, …])</p></td>
<td><p>Load parameters from a dict.</p></td>
</tr>
<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code>(callback[, monitor_all])</p></td>
<td><p>Installs an op hook on the block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code>(prefix)</p></td>
<td><p>Save the model architecture and parameters so they can be loaded again later.</p></td>
</tr>
<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code>()</p></td>
<td><p>Sets all Parameters’ gradient buffer to 0.</p></td>
</tr>
</tbody>
</table>
<dd><p>alias of <a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.SoftmaxCrossEntropyLoss</span></code></a></p>
</dd></dl>
<dl class="class">
<dt id="mxnet.gluon.loss.KLDivLoss">
<em class="property">class </em><code class="sig-name descname">KLDivLoss</code><span class="sig-paren">(</span><em class="sig-param">from_logits=True</em>, <em class="sig-param">axis=-1</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#KLDivLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p>
<p>The Kullback-Leibler divergence loss.</p>
<p>KL divergence measures how one probability distribution diverges from
another. It can be used to minimize information loss when approximating a
distribution. If <cite>from_logits</cite> is True (default), the loss is defined as:</p>
<div class="math notranslate nohighlight">
\[L = \sum_i {label}_i * \big[\log({label}_i) - {pred}_i\big]\]</div>
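<p>A minimal usage sketch (shapes are illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import loss as gloss

mx.npx.set_np()
kl = gloss.KLDivLoss()                       # from_logits=True by default
pred = mx.npx.log_softmax(mx.np.random.uniform(size=(2, 5)))   # log-probabilities
label = mx.npx.softmax(mx.np.random.uniform(size=(2, 5)))      # target distribution
l = kl(pred, label)                          # shape (2,)
</pre></div>
</div>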
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.apply" title="mxnet.gluon.loss.KLDivLoss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.collect_params" title="mxnet.gluon.loss.KLDivLoss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its children’s Parameters (by default); it can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of only those parameters whose names match given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.export" title="mxnet.gluon.loss.KLDivLoss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.forward" title="mxnet.gluon.loss.KLDivLoss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(pred, label[, sample_weight])</p></td>
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.hybridize" title="mxnet.gluon.loss.KLDivLoss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.infer_shape" title="mxnet.gluon.loss.KLDivLoss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.infer_type" title="mxnet.gluon.loss.KLDivLoss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.initialize" title="mxnet.gluon.loss.KLDivLoss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.load" title="mxnet.gluon.loss.KLDivLoss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.load_dict" title="mxnet.gluon.loss.KLDivLoss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from dict</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.load_parameters" title="mxnet.gluon.loss.KLDivLoss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.optimize_for" title="mxnet.gluon.loss.KLDivLoss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.register_forward_hook" title="mxnet.gluon.loss.KLDivLoss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.register_forward_pre_hook" title="mxnet.gluon.loss.KLDivLoss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.register_op_hook" title="mxnet.gluon.loss.KLDivLoss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Installs an op hook on the block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.reset_ctx" title="mxnet.gluon.loss.KLDivLoss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.reset_device" title="mxnet.gluon.loss.KLDivLoss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.save" title="mxnet.gluon.loss.KLDivLoss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters to load again later</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.save_parameters" title="mxnet.gluon.loss.KLDivLoss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.setattr" title="mxnet.gluon.loss.KLDivLoss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.share_parameters" title="mxnet.gluon.loss.KLDivLoss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.summary" title="mxnet.gluon.loss.KLDivLoss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.zero_grad" title="mxnet.gluon.loss.KLDivLoss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffer to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.params" title="mxnet.gluon.loss.KLDivLoss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<p>If <cite>from_logits</cite> is False, loss is defined as:</p>
<div class="math notranslate nohighlight">
\[ \begin{align}\begin{aligned}\DeclareMathOperator{\softmax}{softmax}\\prob = \softmax({pred})\\L = \sum_i {label}_i * \big[\log({label}_i) - \log({prob}_i)\big]\end{aligned}\end{align} \]</div>
<p><cite>label</cite> and <cite>pred</cite> can have arbitrary shape as long as they have the same
number of elements.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>from_logits</strong> (bool, default is <cite>True</cite>) – Whether the input is a log probability (usually from log_softmax) rather
than unnormalized numbers.</p></li>
<li><p><strong>axis</strong> (<em>int</em><em>, </em><em>default -1</em>) – The dimension along which to compute softmax. Only used when <cite>from_logits</cite>
is False.</p></li>
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
<li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li>
</ul>
</dd>
</dl>
<dl class="simple">
<dt>Inputs:</dt><dd><ul class="simple">
<li><p><strong>pred</strong>: prediction tensor with arbitrary shape. If <cite>from_logits</cite> is
True, <cite>pred</cite> should be log probabilities. Otherwise, it should be
unnormalized predictions, e.g. the raw output of a dense layer.</p></li>
<li><p><strong>label</strong>: truth tensor with values in range <cite>(0, 1)</cite>. Must have
the same size as <cite>pred</cite>.</p></li>
<li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).</p></li>
</ul>
</dd>
<dt>Outputs:</dt><dd><ul class="simple">
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.</p></li>
</ul>
</dd>
</dl>
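<p>A minimal usage sketch (shapes and values are illustrative, and assume an
MXNet 2.x installation with the <cite>mx.np</cite> interface):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import loss as gloss

loss_fn = gloss.KLDivLoss(from_logits=True)
# pred holds log probabilities, e.g. the output of log_softmax
pred = mx.np.log(mx.np.array([[0.7, 0.2, 0.1],
                              [0.1, 0.8, 0.1]]))
# label holds target probabilities with the same shape as pred
label = mx.np.array([[0.6, 0.3, 0.1],
                     [0.2, 0.7, 0.1]])
out = loss_fn(pred, label)  # loss tensor of shape (2,)
</pre></div>
</div>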
<p class="rubric">References</p>
<p><a class="reference external" href="https://en.wikipedia.org/wiki/Kullback-Leibler_divergence">Kullback-Leibler divergence</a></p>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
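<p>For example, a sketch of using <cite>apply</cite> to walk the block tree
(<cite>net</cite> is a placeholder for any Block; the printing callback is
illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># print the class name of every child block in the tree, including self
net.apply(lambda block: print(type(block).__name__))
</pre></div>
</div>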
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its
children’s Parameters (default), or, when <cite>select</cite> is given, only the
Parameters whose names match the given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’, this can be done
using regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will be named <cite>data</cite>. When there
is more than one input, the inputs will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4-digit epoch number.
If None, do not export to file but return Python Symbol object and
corresponding dictionary of parameters.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
</dd></dl>
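<p>A sketch of a typical export call on a hybridized block (<cite>net</cite>, <cite>x</cite>,
and the path are placeholders):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.hybridize()
net(x)  # run one forward pass so the graph is cached
net.export('my_model', epoch=0)
# writes my_model-symbol.json and my_model-0000.params
</pre></div>
</div>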
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#KLDivLoss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li>
<li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic-shape operator exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
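<p>For instance, to hybridize with static memory allocation and fixed input
shapes, a common speed optimization (<cite>net</cite> is a placeholder):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># requires input shapes to stay the same between iterations
net.hybridize(static_alloc=True, static_shape=True)
</pre></div>
</div>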
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
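<p>A minimal initialization sketch (<cite>net</cite> is a placeholder for any Block
with Parameters):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net.initialize(init=mx.init.Xavier(), device=mx.cpu())
# pass force_reinit=True to re-initialize already-initialized Parameters
</pre></div>
</div>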
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture. It resets each Block’s
parameter UUIDs to their values at save time so that they match the names
of the saved parameters.</p>
<p>This function assumes the Blocks in the model were created in the same
order as when the model was saved. Each Block is uniquely identified by its
class name and a unique ID assigned in creation order (children are stored
in an OrderedDict), and that unique ID is used to denote the specific Block.</p>
<p>It also assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not use this set of
APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if it had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
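<p>A save/load round trip, under the stated assumption that the model is
recreated deterministically (<cite>Net</cite> is a placeholder class):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net = Net()
net.initialize()
net.save('checkpoint')  # writes checkpoint-model.json and checkpoint-model.params

net2 = Net()            # Blocks must be created in the same order as before
net2.load('checkpoint')
</pre></div>
</div>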
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from dict</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
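<p>A parameters-only round trip, which does not require the architecture files
used by <cite>save</cite>/<cite>load</cite> (<cite>net</cite> is a placeholder):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.save_parameters('net.params')
net.load_parameters('net.params', device=mx.cpu())
</pre></div>
</div>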
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass, without calling the CachedOp. It can be used in place of hybridize;
afterwards, <cite>export</cite> can be called or inference can be run. See
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<p># partition and then export to file
block.optimize_for(x, backend=’myPart’)
block.export(‘partitioned’)</p>
<p># partition and then run inference
block.optimize_for(x, backend=’myPart’)
block(x)</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of the backend, as registered in <cite>SubgraphBackendRegistry</cite>; default None.</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – clears any previous optimizations</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic-shape operator exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.forward" title="mxnet.gluon.loss.KLDivLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
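<p>A sketch of registering and later removing a forward hook (the hook body is
illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>def log_output_shape(block, inputs, output):
    # called after every forward(); must not modify inputs or output
    print(type(block).__name__, getattr(output, 'shape', None))

handle = net.register_forward_hook(log_output_shape)
# ... run forward passes ...
handle.detach()  # remove the hook when no longer needed
</pre></div>
</div>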
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.forward" title="mxnet.gluon.loss.KLDivLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Installs an op hook on the block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters: the name of the
tensor being inspected (str), the name of the operator producing or
consuming that tensor (str), and the tensor being inspected (NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
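<p>A sketch of an op hook that follows the 3-parameter callback contract
described above (the printed statistics are illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>def monitor(tensor_name, op_name, tensor):
    print(tensor_name, op_name, tensor.shape)

net.register_op_hook(monitor, monitor_all=True)  # inspect inputs and outputs
</pre></div>
</div>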
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameter to given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.save" title="Permalink to this definition"></a></dt>
<dd><p>Save the model architecture and parameters to load again later</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by Block class name and a unique ID.
We save each Block’s parameter UUID to restore later in order to match
the saved parameters.</p>
<p>Recursively traverses a Block’s children in order (children are stored in an
OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not use this set of
APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph is saved (Symbol &amp; inputs) if
it has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save
model structures, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Set an attribute to a new value for all Parameters.</p>
<p>For example, set grad_req to null if you don’t need gradients w.r.t. a
model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<dl class="simple">
<dt>which equals to</dt><dd><p>dense1.weight = dense0.weight
dense1.bias = dense0.bias</p>
</dd>
</dl>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Print the summary of the model’s output and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
</dd></dl>
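<p>For example, on an initialized, non-hybridized network (<cite>net</cite> and the
input shape are placeholders):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.summary(mx.np.zeros((1, 3, 224, 224)))
</pre></div>
</div>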
<dl class="method">
<dt id="mxnet.gluon.loss.KLDivLoss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffer to 0.</p>
</dd></dl>
</dd></dl>
<dl class="class">
<dt id="mxnet.gluon.loss.CTCLoss">
<em class="property">class </em><code class="sig-name descname">CTCLoss</code><span class="sig-paren">(</span><em class="sig-param">layout='NTC'</em>, <em class="sig-param">label_layout='NT'</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#CTCLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p>
<p>Connectionist Temporal Classification Loss.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NTC'</em>) – Layout of the prediction tensor. ‘N’, ‘T’, ‘C’ stand for batch size,
sequence length, and alphabet_size, respectively.</p></li>
<li><p><strong>label_layout</strong> (<em>str</em><em>, </em><em>default 'NT'</em>) – Layout of the labels. ‘N’, ‘T’ stand for batch size and sequence
length, respectively.</p></li>
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
</ul>
</dd>
</dl>
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.apply" title="mxnet.gluon.loss.CTCLoss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.collect_params" title="mxnet.gluon.loss.CTCLoss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its children’s Parameters (default), or only the Parameters whose names match the given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.export" title="mxnet.gluon.loss.CTCLoss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.forward" title="mxnet.gluon.loss.CTCLoss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(pred, label[, pred_lengths, …])</p></td>
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.hybridize" title="mxnet.gluon.loss.CTCLoss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.infer_shape" title="mxnet.gluon.loss.CTCLoss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.infer_type" title="mxnet.gluon.loss.CTCLoss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.initialize" title="mxnet.gluon.loss.CTCLoss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.load" title="mxnet.gluon.loss.CTCLoss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.load_dict" title="mxnet.gluon.loss.CTCLoss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from dict</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.load_parameters" title="mxnet.gluon.loss.CTCLoss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.optimize_for" title="mxnet.gluon.loss.CTCLoss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.register_forward_hook" title="mxnet.gluon.loss.CTCLoss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.register_forward_pre_hook" title="mxnet.gluon.loss.CTCLoss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.register_op_hook" title="mxnet.gluon.loss.CTCLoss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Installs an op hook on the block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.reset_ctx" title="mxnet.gluon.loss.CTCLoss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.reset_device" title="mxnet.gluon.loss.CTCLoss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.save" title="mxnet.gluon.loss.CTCLoss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters to load again later</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.save_parameters" title="mxnet.gluon.loss.CTCLoss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.setattr" title="mxnet.gluon.loss.CTCLoss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.share_parameters" title="mxnet.gluon.loss.CTCLoss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.summary" title="mxnet.gluon.loss.CTCLoss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.zero_grad" title="mxnet.gluon.loss.CTCLoss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffer to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.params" title="mxnet.gluon.loss.CTCLoss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<dl class="simple">
<dt>Inputs:</dt><dd><ul class="simple">
<li><p><strong>pred</strong>: unnormalized prediction tensor (before softmax).
Its shape depends on <cite>layout</cite>. If <cite>layout</cite> is ‘TNC’, pred
should have shape <cite>(sequence_length, batch_size, alphabet_size)</cite>.
Note that in the last dimension, index <cite>alphabet_size-1</cite> is reserved
for internal use as the blank label. So <cite>alphabet_size</cite> is one plus the
actual alphabet size.</p></li>
<li><p><strong>label</strong>: zero-based label tensor. Its shape depends on <cite>label_layout</cite>.
If <cite>label_layout</cite> is ‘TN’, <cite>label</cite> should have shape
<cite>(label_sequence_length, batch_size)</cite>.</p></li>
<li><p><strong>pred_lengths</strong>: optional (default None), used for specifying the
length of each entry when different <cite>pred</cite> entries in the same batch
have different lengths. <cite>pred_lengths</cite> should have shape <cite>(batch_size,)</cite>.</p></li>
<li><p><strong>label_lengths</strong>: optional (default None), used for specifying the
length of each entry when different <cite>label</cite> entries in the same batch
have different lengths. <cite>label_lengths</cite> should have shape <cite>(batch_size,)</cite>.</p></li>
</ul>
</dd>
<dt>Outputs:</dt><dd><ul class="simple">
<li><p><strong>loss</strong>: output loss has shape <cite>(batch_size,)</cite>.</p></li>
</ul>
</dd>
</dl>
<p><strong>Example</strong>: suppose the vocabulary is <cite>[a, b, c]</cite>, and in one batch we
have three sequences ‘ba’, ‘cbb’, and ‘abac’. We can index the labels as
<cite>{‘a’: 0, ‘b’: 1, ‘c’: 2, blank: 3}</cite>. Then <cite>alphabet_size</cite> should be 4,
where label 3 is reserved for internal use by <cite>CTCLoss</cite>. We then need to
pad each sequence with <cite>-1</cite> to make a rectangular <cite>label</cite> tensor:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="p">[[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">],</span>
<span class="p">[</span><span class="mi">2</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">],</span>
<span class="p">[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="p">]]</span>
</pre></div>
</div>
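<p>Putting it together, a sketch of computing the CTC loss for this batch
(the predictions are random placeholders, assuming an MXNet 2.x installation
with the <cite>mx.np</cite> interface):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import loss as gloss

ctc = gloss.CTCLoss(layout='NTC', label_layout='NT')
# 3 sequences, 10 time steps, alphabet_size=4 (index 3 reserved for blank)
pred = mx.np.random.uniform(size=(3, 10, 4))
label = mx.np.array([[1, 0, -1, -1],
                     [2, 1, 1, -1],
                     [0, 1, 0, 2]])
out = ctc(pred, label)  # loss tensor of shape (3,)
</pre></div>
</div>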
<p class="rubric">References</p>
<p><a class="reference external" href="http://www.cs.toronto.edu/~graves/icml_2006.pdf">Connectionist Temporal Classification: Labelling Unsegmented
Sequence Data with Recurrent Neural Networks</a></p>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
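<p>A minimal sketch (with an assumed toy network) that visits every child
block and then self, printing each class name:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Dense(10), nn.Dense(2))
# fn(block) is called on each child block and then on net itself
net.apply(lambda block: print(type(block).__name__))
</pre></div>
</div>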
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its
children’s Parameters by default; it can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of only those
parameters whose names match the given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’; this can be done
using regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will have the name <cite>data</cite>. When there
are multiple inputs, they will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4-digit epoch number.
If None, do not export to file but return Python Symbol object and
corresponding dictionary of parameters.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
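<p>A workflow sketch (model and file names are illustrative): export requires a
hybridized block and at least one forward pass so the cached graph exists:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(2)
net.initialize()
net.hybridize()
net(mx.np.zeros((1, 4)))           # build the cached graph
net.export('my_model', epoch=0)    # my_model-symbol.json, my_model-0000.params
</pre></div>
</div>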
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">pred_lengths=None</em>, <em class="sig-param">label_lengths=None</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#CTCLoss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li>
<li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
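<p>A minimal sketch of the common flags; note that <cite>static_shape</cite> requires
<cite>static_alloc=True</cite>, as described above:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(2)
net.initialize()
net.hybridize(static_alloc=True, static_shape=True)
net(mx.np.zeros((1, 4)))   # the first call triggers graph construction
</pre></div>
</div>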
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
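<p>A short sketch (initializer and device choices are illustrative): use a
non-default global initializer and force re-initialization:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(2)
net.initialize(init=mx.init.Xavier(), device=mx.cpu(), force_reinit=True)
</pre></div>
</div>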
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API.</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture. It resets each Block’s
parameter UUIDs to their values at save time so that they match the names
of the saved parameters.</p>
<p>This function assumes the Blocks in the model are created in the same
order they were when the model was saved. This is because each Block is
uniquely identified by its class name and a unique in-order ID (since the
children are stored in an OrderedDict), and that unique ID is used to denote
the specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if it had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from a dict.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
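<p>A round-trip sketch (file name is illustrative) pairing <cite>save_parameters</cite>
with <cite>load_parameters</cite>:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(2)
net.initialize()
net(mx.np.zeros((1, 4)))                  # materialize parameter shapes
net.save_parameters('dense.params')

net2 = nn.Dense(2)
net2.load_parameters('dense.params', device=mx.cpu())
</pre></div>
</div>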
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. Can be used in place of hybridize;
afterwards, <cite>export</cite> can be called or inference can be run. See
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># partition and then export to file
block.optimize_for(x, backend='myPart')
block.export('partitioned')

# partition and then run inference
block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of the backend, as registered in <cite>SubgraphBackendRegistry</cite>, default None</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – clears any previous optimizations</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.forward" title="mxnet.gluon.loss.CTCLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
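<p>A sketch with a hypothetical hook that logs output shapes; per the contract
above, the hook must not modify the input or output:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

def log_output(block, inputs, output):
    print(type(block).__name__, 'output shape:', output.shape)

net = nn.Dense(2)
net.initialize()
handle = net.register_forward_hook(log_output)
net(mx.np.zeros((1, 4)))
handle.detach()   # remove the hook via the returned HookHandle
</pre></div>
</div>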
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.forward" title="mxnet.gluon.loss.CTCLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Installs an op hook on the block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters: the name of the
tensor being inspected (str), the name of the operator producing or
consuming that tensor (str), and the tensor being inspected (NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
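<p>A sketch with a hypothetical monitor matching the 3-parameter callback
described above; op hooks report intermediate outputs of the hybridized graph:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

def monitor(tensor_name, op_name, tensor):
    print(tensor_name, op_name, tensor.shape)

net = nn.Dense(2)
net.initialize()
net.hybridize()
net.register_op_hook(monitor, monitor_all=True)
net(mx.np.zeros((1, 4)))
</pre></div>
</div>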
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameter to given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.save" title="Permalink to this definition"></a></dt>
<dd><p>Save the model architecture and parameters so they can be loaded again later.</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by Block class name and a unique ID.
We save each Block’s parameter UUID to restore later in order to match
the saved parameters.</p>
<p>Recursively traverses a Block’s children in order (since it’s an
OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph is saved (Symbol &amp; inputs) if
it has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
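<p>A round-trip sketch (the <cite>checkpoint</cite> prefix is illustrative): the model
must be re-created in the same Block order before calling <cite>load</cite>, per the
caveats above:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

def make_net():
    net = nn.Sequential()
    net.add(nn.Dense(10), nn.Dense(2))
    return net

net = make_net()
net.initialize()
net(mx.np.zeros((1, 4)))   # materialize shapes before saving
net.save('checkpoint')     # checkpoint-model.json, checkpoint-model.params

net2 = make_net()
net2.load('checkpoint')
</pre></div>
</div>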
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save
model structures, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Set an attribute to a new value for all Parameters.</p>
<p>For example, set grad_req to null if you don’t need gradients w.r.t. a
model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<dl class="simple">
<dt>which equals to</dt><dd><p>dense1.weight = dense0.weight
dense1.bias = dense0.bias</p>
</dd>
</dl>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Print the summary of the model’s output and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
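<p>A minimal sketch: <cite>summary</cite> requires an initialized, non-hybridized
network and a sample input:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Dense(10, activation='relu'), nn.Dense(2))
net.initialize()
net.summary(mx.np.zeros((1, 4)))
</pre></div>
</div>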
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CTCLoss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffers to 0.</p>
</dd></dl>
</dd></dl>
<dl class="class">
<dt id="mxnet.gluon.loss.HuberLoss">
<em class="property">class </em><code class="sig-name descname">HuberLoss</code><span class="sig-paren">(</span><em class="sig-param">rho=1</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#HuberLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p>
<p>Calculates smoothed L1 loss that is equal to L1 loss if absolute error
exceeds rho but is equal to L2 loss otherwise. Also called SmoothedL1 loss.</p>
<div class="math notranslate nohighlight">
\[\begin{split}L = \sum_i \begin{cases} \frac{1}{2 {rho}} ({label}_i - {pred}_i)^2 &amp;
\text{ if } |{label}_i - {pred}_i| &lt; {rho} \\
|{label}_i - {pred}_i| - \frac{{rho}}{2} &amp;
\text{ otherwise }
\end{cases}\end{split}\]</div>
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.apply" title="mxnet.gluon.loss.HuberLoss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.collect_params" title="mxnet.gluon.loss.HuberLoss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its children’s Parameters by default; it can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of only those parameters whose names match the given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.export" title="mxnet.gluon.loss.HuberLoss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.forward" title="mxnet.gluon.loss.HuberLoss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(pred, label[, sample_weight])</p></td>
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.hybridize" title="mxnet.gluon.loss.HuberLoss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.infer_shape" title="mxnet.gluon.loss.HuberLoss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.infer_type" title="mxnet.gluon.loss.HuberLoss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.initialize" title="mxnet.gluon.loss.HuberLoss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.load" title="mxnet.gluon.loss.HuberLoss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.load_dict" title="mxnet.gluon.loss.HuberLoss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from dict</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.load_parameters" title="mxnet.gluon.loss.HuberLoss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.optimize_for" title="mxnet.gluon.loss.HuberLoss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.register_forward_hook" title="mxnet.gluon.loss.HuberLoss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.register_forward_pre_hook" title="mxnet.gluon.loss.HuberLoss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.register_op_hook" title="mxnet.gluon.loss.HuberLoss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Installs an op hook on the block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.reset_ctx" title="mxnet.gluon.loss.HuberLoss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.reset_device" title="mxnet.gluon.loss.HuberLoss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.save" title="mxnet.gluon.loss.HuberLoss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters so they can be loaded again later.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.save_parameters" title="mxnet.gluon.loss.HuberLoss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.setattr" title="mxnet.gluon.loss.HuberLoss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.share_parameters" title="mxnet.gluon.loss.HuberLoss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.summary" title="mxnet.gluon.loss.HuberLoss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.zero_grad" title="mxnet.gluon.loss.HuberLoss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffers to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.params" title="mxnet.gluon.loss.HuberLoss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<p><cite>label</cite> and <cite>pred</cite> can have arbitrary shape as long as they have the same
number of elements.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>rho</strong> (<em>float</em><em>, </em><em>default 1</em>) – Threshold for trimmed mean estimator.</p></li>
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
<li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li>
</ul>
</dd>
</dl>
<dl class="simple">
<dt>Inputs:</dt><dd><ul class="simple">
<li><p><strong>pred</strong>: prediction tensor with arbitrary shape</p></li>
<li><p><strong>label</strong>: target tensor with the same size as pred.</p></li>
<li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).</p></li>
</ul>
</dd>
<dt>Outputs:</dt><dd><ul class="simple">
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.</p></li>
</ul>
</dd>
</dl>
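<p>A toy sketch (values are illustrative) applying the formula above with
<cite>rho=1</cite>:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon.loss import HuberLoss

loss_fn = HuberLoss(rho=1)
pred  = mx.np.array([[0.5, 2.0], [1.0, -1.0]])
label = mx.np.array([[0.0, 0.0], [1.0,  1.0]])
# errors (0.5, 2.0) -&gt; (0.125, 1.5) and (0.0, -2.0) -&gt; (0.0, 1.5);
# averaging over the non-batch axis gives roughly [0.8125, 0.75]
loss = loss_fn(pred, label)   # shape (2,)
</pre></div>
</div>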
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its
children’s Parameters by default; it can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of only those
parameters whose names match the given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’; this can be done
using regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will have the name <cite>data</cite>. When there
are multiple inputs, they will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4-digit epoch number.
If None, do not export to file but return Python Symbol object and
corresponding dictionary of parameters.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#HuberLoss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li>
<li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
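<p class="rubric">Examples</p>
<p>A minimal sketch of initializing a toy block before its first forward pass;
the <cite>nn.Dense</cite> block, initializer choice, and shapes are illustrative
(HuberLoss itself holds no parameters):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import init, np, npx
from mxnet.gluon import nn
npx.set_np()

net = nn.Dense(10)
net.initialize(init=init.Xavier())  # global default Initializer for all Parameters
out = net(np.ones((4, 20)))         # parameter shapes are inferred on the first call
</pre></div>
</div>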
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API.</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture. It resets each Block’s
parameter UUIDs to the values they had when saved, so that they match the
names of the saved parameters.</p>
<p>This function assumes the Blocks in the model were created in the same
order they were when the model was saved. This is because each Block is
uniquely identified by its class name and an ordered unique ID (its children
are stored in an OrderedDict), and that unique ID denotes the specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if it had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
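<p class="rubric">Examples</p>
<p>A minimal sketch of the save/load round trip described above; the
<cite>build_net</cite> helper, block, and file prefix are assumptions for illustration:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()

def build_net():
    # must construct Blocks in the same order every time
    return nn.Dense(4)

net = build_net()
net.initialize()
net(np.ones((1, 8)))   # make parameter shapes concrete
net.save('mymodel')    # writes mymodel-model.json and mymodel-model.params

net2 = build_net()
net2.load('mymodel')   # restores parameters by matching Block UUIDs
</pre></div>
</div>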
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from dict</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter, if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
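<p class="rubric">Examples</p>
<p>One plausible use is copying parameters between two identically structured
networks; the blocks and shapes below are illustrative:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()

net_a = nn.Dense(4); net_a.initialize(); net_a(np.ones((1, 8)))
net_b = nn.Dense(4); net_b.initialize(); net_b(np.ones((1, 8)))

# copy net_a's parameters into net_b (same structure, so names match)
net_b.load_dict(net_a.collect_params(), device=mx.device.cpu())
</pre></div>
</div>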
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
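<p class="rubric">Examples</p>
<p>A minimal sketch of the usual checkpoint round trip with
<cite>save_parameters</cite>; the block, shapes, and filename are illustrative:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()

net = nn.Dense(4)
net.initialize()
net(np.ones((1, 8)))
net.save_parameters('net.params')   # parameters only, no architecture

net2 = nn.Dense(4)                  # re-create the same architecture in code
net2.load_parameters('net.params', device=mx.device.cpu())
</pre></div>
</div>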
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. Can be used in place of hybridize;
afterwards, <cite>export</cite> can be called or inference can be run. See
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># partition and then export to file
block.optimize_for(x, backend='myPart')
block.export('partitioned')

# partition and then run inference
block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of the backend, as registered in <cite>SubgraphBackendRegistry</cite>. Default None.</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – clears any previous optimizations</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.forward" title="mxnet.gluon.loss.HuberLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
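<p class="rubric">Examples</p>
<p>A small sketch of a logging hook; the block, shapes, and print format are
illustrative:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()

def log_output(block, inputs, output):
    # runs right after forward(); must not modify inputs or output
    print(type(block).__name__, output.shape)

net = nn.Dense(4)
net.initialize()
handle = net.register_forward_hook(log_output)
net(np.ones((2, 8)))
handle.detach()   # remove the hook when it is no longer needed
</pre></div>
</div>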
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.forward" title="mxnet.gluon.loss.HuberLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Install op hook for block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters: the name of the
tensor being inspected (str), the name of the operator producing or
consuming that tensor (str), and the tensor being inspected (NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
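<p class="rubric">Examples</p>
<p>A small sketch of an op-level monitor; the block, shapes, and print format
are illustrative:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()

def monitor(tensor_name, op_name, tensor):
    # inspect intermediate outputs of the hybridized graph
    print(tensor_name, op_name, tensor.shape)

net = nn.Dense(4)
net.initialize()
net.hybridize()
net.register_op_hook(monitor, monitor_all=True)  # monitor inputs and outputs
net(np.ones((2, 8)))
</pre></div>
</div>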
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameter to given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
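<p class="rubric">Examples</p>
<p>A minimal sketch of moving all Parameters from CPU to the first GPU,
assuming a GPU is present; the block and shapes are illustrative:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()

net = nn.Dense(4)
net.initialize(device=mx.device.cpu())
net(np.ones((1, 8)))
net.reset_device(mx.device.gpu(0))  # every Parameter is copied to gpu(0)
</pre></div>
</div>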
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.save" title="Permalink to this definition"></a></dt>
<dd><p>Save the model architecture and parameters so they can be loaded again later.</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by its class name and a unique ID.
We save each Block’s parameter UUIDs so that they can be restored later
to match the saved parameters.</p>
<p>Recursively traverses a Block’s children in order (since they are stored
in an OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph is saved (Symbol &amp; inputs) if
it has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save
model structures, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Set an attribute to a new value for all Parameters.</p>
<p>For example, set grad_req to null if you don’t need gradients w.r.t. a
model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<dl class="simple">
<dt>which equals to</dt><dd><p>dense1.weight = dense0.weight
dense1.bias = dense0.bias</p>
</dd>
</dl>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Print the summary of the model’s output and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
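<p class="rubric">Examples</p>
<p>A minimal sketch, assuming an initialized, non-hybridized block; the block
and input shape are illustrative:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()

net = nn.Dense(4)
net.initialize()               # must be initialized and not hybridized
net.summary(np.ones((2, 8)))   # prints per-layer outputs and parameter counts
</pre></div>
</div>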
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HuberLoss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffer to 0.</p>
</dd></dl>
</dd></dl>
<dl class="class">
<dt id="mxnet.gluon.loss.HingeLoss">
<em class="property">class </em><code class="sig-name descname">HingeLoss</code><span class="sig-paren">(</span><em class="sig-param">margin=1</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#HingeLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p>
<p>Calculates the hinge loss function often used in SVMs:</p>
<div class="math notranslate nohighlight">
\[L = \sum_i \max(0, {margin} - {pred}_i \cdot {label}_i)\]</div>
<p>where <cite>pred</cite> is the classifier prediction and <cite>label</cite> is the target tensor
containing values -1 or 1. <cite>label</cite> and <cite>pred</cite> must have the same number of
elements.</p>
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.apply" title="mxnet.gluon.loss.HingeLoss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.collect_params" title="mxnet.gluon.loss.HingeLoss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its children’s Parameters (default); it can also return a selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of Parameters matching given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.export" title="mxnet.gluon.loss.HingeLoss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.forward" title="mxnet.gluon.loss.HingeLoss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(pred, label[, sample_weight])</p></td>
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.hybridize" title="mxnet.gluon.loss.HingeLoss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.infer_shape" title="mxnet.gluon.loss.HingeLoss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.infer_type" title="mxnet.gluon.loss.HingeLoss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.initialize" title="mxnet.gluon.loss.HingeLoss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.load" title="mxnet.gluon.loss.HingeLoss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.load_dict" title="mxnet.gluon.loss.HingeLoss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from dict</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.load_parameters" title="mxnet.gluon.loss.HingeLoss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.optimize_for" title="mxnet.gluon.loss.HingeLoss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.register_forward_hook" title="mxnet.gluon.loss.HingeLoss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.register_forward_pre_hook" title="mxnet.gluon.loss.HingeLoss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.register_op_hook" title="mxnet.gluon.loss.HingeLoss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Install op hook for block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.reset_ctx" title="mxnet.gluon.loss.HingeLoss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.reset_device" title="mxnet.gluon.loss.HingeLoss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.save" title="mxnet.gluon.loss.HingeLoss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters to load again later</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.save_parameters" title="mxnet.gluon.loss.HingeLoss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.setattr" title="mxnet.gluon.loss.HingeLoss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.share_parameters" title="mxnet.gluon.loss.HingeLoss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.summary" title="mxnet.gluon.loss.HingeLoss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.zero_grad" title="mxnet.gluon.loss.HingeLoss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffer to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.params" title="mxnet.gluon.loss.HingeLoss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>margin</strong> (<em>float</em>) – The margin in hinge loss. Defaults to 1.0.</p></li>
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
<li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li>
</ul>
</dd>
</dl>
<dl class="simple">
<dt>Inputs:</dt><dd><ul class="simple">
<li><p><strong>pred</strong>: prediction tensor with arbitrary shape.</p></li>
<li><p><strong>label</strong>: truth tensor with values -1 or 1. Must have the same size
as pred.</p></li>
<li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).</p></li>
</ul>
</dd>
<dt>Outputs:</dt><dd><ul class="simple">
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.</p></li>
</ul>
</dd>
</dl>
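<p class="rubric">Examples</p>
<p>A minimal sketch of computing the hinge loss on a toy batch; the scores,
labels, and shapes below are illustrative only:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np, npx
from mxnet.gluon import loss as gloss
npx.set_np()

loss_fn = gloss.HingeLoss(margin=1)
pred = np.array([[0.9], [-0.3], [0.2]])     # classifier scores
label = np.array([[1.0], [-1.0], [-1.0]])   # targets in {-1, 1}
print(loss_fn(pred, label))                 # shape (3,): one loss per sample
</pre></div>
</div>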
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its
children’s Parameters (default); it can also return a selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code>
of Parameters that match some given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’, this can be done
using regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will be named <cite>data</cite>. When there
are multiple inputs, they will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4 digits epoch number.
If None, do not export to file but return Python Symbol object and
corresponding dictionary of parameters.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
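<p class="rubric">Examples</p>
<p>A minimal sketch of the usual export flow; the <cite>nn.Dense</cite> block, input
shape, and file prefix are assumptions for illustration (HingeLoss itself
holds no parameters):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np, npx
from mxnet.gluon import nn
npx.set_np()

net = nn.Dense(2)
net.initialize()
net.hybridize()
net(np.ones((1, 10)))   # run forward once so the graph is cached
sym_file, params_file = net.export('my-model', epoch=0)
# writes my-model-symbol.json and my-model-0000.params
</pre></div>
</div>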
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#HingeLoss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li>
<li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
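<p class="rubric">Examples</p>
<p>A minimal sketch of turning on hybrid execution for this loss; the shapes
and flags are illustrative:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np, npx
from mxnet.gluon import loss as gloss
npx.set_np()

loss_fn = gloss.HingeLoss()
loss_fn.hybridize(static_alloc=True, static_shape=True)  # static_shape requires static_alloc
l = loss_fn(np.ones((4, 1)), np.ones((4, 1)))  # first call builds and caches the graph
</pre></div>
</div>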
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API.</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture. It resets each Block’s
parameter UUIDs to the values they had when saved, so that they match the
names of the saved parameters.</p>
<p>This function assumes the Blocks in the model were created in the same
order they were when the model was saved. This is because each Block is
uniquely identified by its class name and an ordered unique ID (its children
are stored in an OrderedDict), and that unique ID denotes the specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if it had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from dict</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter, if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. Can be used in place of hybridize;
afterwards, <cite>export</cite> can be called or inference can be run. See
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># partition and then export to file
block.optimize_for(x, backend='myPart')
block.export('partitioned')

# partition and then run inference
block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of the backend, as registered in <cite>SubgraphBackendRegistry</cite>. Default None.</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – clears any previous optimizations</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.forward" title="mxnet.gluon.loss.HingeLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
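<p>As a minimal sketch (the hook function and the block below are hypothetical), a forward hook can be registered and later removed through the returned handle:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

# hypothetical hook: log the output shape after every forward() call
def log_output(block, inputs, output):
    print(block.__class__.__name__, output.shape)

net = nn.Dense(4)
net.initialize()
handle = net.register_forward_hook(log_output)  # returns a HookHandle
net(mx.nd.ones((2, 8)))                         # hook fires after forward()
handle.detach()                                 # remove the hook when done
</pre></div>
</div>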
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.forward" title="mxnet.gluon.loss.HingeLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Installs an op hook on the block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters: the name of
the tensor being inspected (str), the name of the operator producing
or consuming that tensor (str), and the tensor being inspected
(NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
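<p>A minimal sketch of such a callback (the function below is hypothetical and simply prints a summary of each monitored tensor):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

# hypothetical callback: print the tensor name, the op name, and the max value
def monitor(tensor_name, op_name, tensor):
    print(tensor_name, op_name, tensor.abs().max().asscalar())

net = nn.Dense(4)
net.initialize()
net.register_op_hook(monitor, monitor_all=True)
net.hybridize()          # op hooks inspect outputs after hybridization
net(mx.nd.ones((2, 8)))
</pre></div>
</div>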
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameter to given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
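<p>For example (assuming a GPU is available; otherwise this raises an error), parameters initialized on CPU can be moved to another device:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(4)
net.initialize(device=mx.cpu())
net.reset_device(mx.gpu(0))  # re-assign every Parameter to GPU 0
</pre></div>
</div>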
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.save" title="Permalink to this definition"></a></dt>
<dd><p>Saves the model architecture and parameters so they can be loaded again later.</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by Block class name and a unique ID.
We save each Block’s parameter UUID to restore later in order to match
the saved parameters.</p>
<p>Recursively traverses a Block’s children in order (since they are stored
in an OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not use this
set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph is saved (Symbol &amp; inputs) if
it has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
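<p>A minimal sketch (the prefix <cite>mynet</cite> is hypothetical):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(4)
net.initialize()
net(mx.nd.ones((2, 8)))  # run once so parameter shapes are known
net.save('mynet')        # writes mynet-model.json and mynet-model.params
</pre></div>
</div>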
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save
model structures, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
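<p>A minimal save/load roundtrip sketch (the filename is hypothetical); note that the loading model must be rebuilt with the same architecture:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(4)
net.initialize()
net(mx.nd.ones((2, 8)))              # run once so parameter shapes are known
net.save_parameters('dense.params')  # weights only; no model structure

net2 = nn.Dense(4)                   # rebuild the same architecture
net2.load_parameters('dense.params')
</pre></div>
</div>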
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Set an attribute to a new value for all Parameters.</p>
<p>For example, set grad_req to null if you don’t need gradient w.r.t a
model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<dl class="simple">
<dt>which is equivalent to</dt><dd><p>dense1.weight = dense0.weight
dense1.bias = dense0.bias</p>
</dd>
</dl>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Print the summary of the model’s output and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
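<p>For example, a sketch with a hypothetical single-layer block:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(4)
net.initialize()                 # must be initialized, and not hybridized
net.summary(mx.nd.ones((2, 8)))  # prints per-layer output shapes and parameter counts
</pre></div>
</div>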
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.HingeLoss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffer to 0.</p>
</dd></dl>
</dd></dl>
<dl class="class">
<dt id="mxnet.gluon.loss.SquaredHingeLoss">
<em class="property">class </em><code class="sig-name descname">SquaredHingeLoss</code><span class="sig-paren">(</span><em class="sig-param">margin=1</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#SquaredHingeLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p>
<p>Calculates the soft-margin loss function used in SVMs:</p>
<div class="math notranslate nohighlight">
\[L = \sum_i max(0, {margin} - {pred}_i \cdot {label}_i)^2\]</div>
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.apply" title="mxnet.gluon.loss.SquaredHingeLoss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.collect_params" title="mxnet.gluon.loss.SquaredHingeLoss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its children’s Parameters (default); it can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of Parameters that match the given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.export" title="mxnet.gluon.loss.SquaredHingeLoss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.forward" title="mxnet.gluon.loss.SquaredHingeLoss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(pred, label[, sample_weight])</p></td>
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.hybridize" title="mxnet.gluon.loss.SquaredHingeLoss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.infer_shape" title="mxnet.gluon.loss.SquaredHingeLoss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.infer_type" title="mxnet.gluon.loss.SquaredHingeLoss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.initialize" title="mxnet.gluon.loss.SquaredHingeLoss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.load" title="mxnet.gluon.loss.SquaredHingeLoss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.load_dict" title="mxnet.gluon.loss.SquaredHingeLoss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from dict</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.load_parameters" title="mxnet.gluon.loss.SquaredHingeLoss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.optimize_for" title="mxnet.gluon.loss.SquaredHingeLoss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.register_forward_hook" title="mxnet.gluon.loss.SquaredHingeLoss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.register_forward_pre_hook" title="mxnet.gluon.loss.SquaredHingeLoss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.register_op_hook" title="mxnet.gluon.loss.SquaredHingeLoss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Install op hook for block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.reset_ctx" title="mxnet.gluon.loss.SquaredHingeLoss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.reset_device" title="mxnet.gluon.loss.SquaredHingeLoss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.save" title="mxnet.gluon.loss.SquaredHingeLoss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters to load again later</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.save_parameters" title="mxnet.gluon.loss.SquaredHingeLoss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.setattr" title="mxnet.gluon.loss.SquaredHingeLoss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.share_parameters" title="mxnet.gluon.loss.SquaredHingeLoss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.summary" title="mxnet.gluon.loss.SquaredHingeLoss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.zero_grad" title="mxnet.gluon.loss.SquaredHingeLoss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffer to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.params" title="mxnet.gluon.loss.SquaredHingeLoss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<p>where <cite>pred</cite> is the classifier prediction and <cite>label</cite> is the target tensor
containing values -1 or 1. <cite>label</cite> and <cite>pred</cite> can have arbitrary shape as
long as they have the same number of elements.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>margin</strong> (<em>float</em>) – The margin in hinge loss. Defaults to 1.0</p></li>
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
<li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li>
</ul>
</dd>
</dl>
<dl class="simple">
<dt>Inputs:</dt><dd><ul class="simple">
<li><p><strong>pred</strong>: prediction tensor with arbitrary shape</p></li>
<li><p><strong>label</strong>: truth tensor with values -1 or 1. Must have the same size
as pred.</p></li>
<li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).</p></li>
</ul>
</dd>
<dt>Outputs:</dt><dd><ul class="simple">
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.</p></li>
</ul>
</dd>
</dl>
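<p>A minimal usage sketch (the prediction and label values below are made up for illustration):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet import gluon

loss_fn = gluon.loss.SquaredHingeLoss(margin=1.0)
pred  = mx.nd.array([[0.6], [-0.3]])  # classifier scores
label = mx.nd.array([[1.0], [-1.0]])  # targets must be -1 or 1
loss = loss_fn(pred, label)           # shape (batch_size,) = (2,)
# first sample: max(0, 1 - 0.6 * 1)^2 = 0.16
print(loss.asnumpy())
</pre></div>
</div>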
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its
children’s Parameters (default); it can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code>
of Parameters that match the given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’, this can be done
using regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will be named <cite>data</cite>. When there
are multiple inputs, they will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4-digit epoch number.
If None, do not export to file but return the Python Symbol object and
the corresponding dictionary of parameters.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
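<p>A minimal sketch (the path <cite>model</cite> is hypothetical); the block must have been hybridized and run once so the graph is cached:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(4)
net.initialize()
net.hybridize()
net(mx.nd.ones((2, 8)))  # forward once to cache the graph
sym_file, params_file = net.export('model', epoch=0)
# creates 'model-symbol.json' and 'model-0000.params'
</pre></div>
</div>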
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#SquaredHingeLoss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li>
<li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to partition the graph when a dynamic-shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Changing the input shapes is still allowed
but will be slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
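<p>A minimal sketch of enabling hybridization with static allocation (the network below is hypothetical):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Dense(16, activation='relu'), nn.Dense(4))
net.initialize()
# static_shape requires static_alloc to be True as well
net.hybridize(static_alloc=True, static_shape=True)
net(mx.nd.ones((2, 8)))  # first call builds and caches the static graph
</pre></div>
</div>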
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
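<p>For example, a sketch overriding the default initializer:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(4)
# Xavier is used for any Parameter whose own init is None
net.initialize(init=mx.init.Xavier(), device=mx.cpu())
</pre></div>
</div>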
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API.</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture. It resets each Block’s
parameter UUIDs as they were when saved in order to match the names of the
saved parameters.</p>
<p>This function assumes the Blocks in the model were created in the same
order they were when the model was saved. This is because each Block is
uniquely identified by its class name and a unique ID assigned in
creation order (since children are stored in an OrderedDict), and that
unique ID is used to denote the specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not use this
set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if it had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from dict</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be one of {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for
casting the parameters.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be one of {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for
casting the parameters.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. Can be used in place of hybridize,
afterwards <cite>export</cite> can be called or inference can be run. See README.md in
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># partition and then export to file
block.optimize_for(x, backend='myPart')
block.export('partitioned')

# partition and then run inference
block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of the backend, as registered in <cite>SubgraphBackendRegistry</cite>. Default is None.</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – Clears any previous optimizations.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to partition the graph when a dynamic-shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Changing the input shapes is still allowed
but will be slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.forward" title="mxnet.gluon.loss.SquaredHingeLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.forward" title="mxnet.gluon.loss.SquaredHingeLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Installs an op hook on the block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters: the name of
the tensor being inspected (str), the name of the operator producing
or consuming that tensor (str), and the tensor being inspected
(NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameter to given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.save" title="Permalink to this definition"></a></dt>
<dd><p>Saves the model architecture and parameters so they can be loaded again later.</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by Block class name and a unique ID.
We save each Block’s parameter UUID to restore later in order to match
the saved parameters.</p>
<p>Recursively traverses a Block’s children in order (since they are stored
in an OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not use this
set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph is saved (Symbol &amp; inputs) if
it has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save
model structures, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Set an attribute to a new value for all Parameters.</p>
<p>For example, set grad_req to null if you don’t need gradient w.r.t a
model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<dl class="simple">
<dt>which is equivalent to</dt><dd><p>dense1.weight = dense0.weight
dense1.bias = dense0.bias</p>
</dd>
</dl>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
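<p>A short sketch of that interaction, continuing the <cite>dense0</cite>/<cite>dense1</cite> example above (the file name is hypothetical):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>dense1.share_parameters(dense0.collect_params())
# both blocks now hold the same Parameter objects, so loading into dense0...
dense0.load_parameters('dense0.params')
# ...is also visible through dense1
assert dense1.weight is dense0.weight
</pre></div>
</div>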
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Print the summary of the model’s output and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
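<p>For example, a minimal sketch (assumptions: a small <cite>nn.Dense</cite> block and the MXNet 2.x NumPy interface):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

mx.npx.set_np()                   # use the NumPy interface (MXNet 2.x)
net = nn.Dense(10)
net.initialize()
net.summary(mx.np.ones((1, 4)))   # prints per-layer output shapes and parameter counts
</pre></div>
</div>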
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SquaredHingeLoss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffers to 0.</p>
</dd></dl>
</dd></dl>
<dl class="class">
<dt id="mxnet.gluon.loss.LogisticLoss">
<em class="property">class </em><code class="sig-name descname">LogisticLoss</code><span class="sig-paren">(</span><em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">label_format='signed'</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#LogisticLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p>
<p>Calculates the logistic loss (for binary losses only):</p>
<div class="math notranslate nohighlight">
\[L = \sum_i \log(1 + \exp(- {pred}_i \cdot {label}_i))\]</div>
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.apply" title="mxnet.gluon.loss.LogisticLoss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.collect_params" title="mxnet.gluon.loss.LogisticLoss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing the Parameters of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and all of its children (default), or only the Parameters whose names match the given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.export" title="mxnet.gluon.loss.LogisticLoss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.forward" title="mxnet.gluon.loss.LogisticLoss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(pred, label[, sample_weight])</p></td>
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.hybridize" title="mxnet.gluon.loss.LogisticLoss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.infer_shape" title="mxnet.gluon.loss.LogisticLoss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.infer_type" title="mxnet.gluon.loss.LogisticLoss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.initialize" title="mxnet.gluon.loss.LogisticLoss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.load" title="mxnet.gluon.loss.LogisticLoss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.load_dict" title="mxnet.gluon.loss.LogisticLoss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from a dict.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.load_parameters" title="mxnet.gluon.loss.LogisticLoss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.optimize_for" title="mxnet.gluon.loss.LogisticLoss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.register_forward_hook" title="mxnet.gluon.loss.LogisticLoss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.register_forward_pre_hook" title="mxnet.gluon.loss.LogisticLoss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.register_op_hook" title="mxnet.gluon.loss.LogisticLoss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Install op hook for block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.reset_ctx" title="mxnet.gluon.loss.LogisticLoss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.reset_device" title="mxnet.gluon.loss.LogisticLoss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.save" title="mxnet.gluon.loss.LogisticLoss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters to load again later</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.save_parameters" title="mxnet.gluon.loss.LogisticLoss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.setattr" title="mxnet.gluon.loss.LogisticLoss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.share_parameters" title="mxnet.gluon.loss.LogisticLoss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.summary" title="mxnet.gluon.loss.LogisticLoss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.zero_grad" title="mxnet.gluon.loss.LogisticLoss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffers to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.params" title="mxnet.gluon.loss.LogisticLoss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<p>where <cite>pred</cite> is the classifier prediction and <cite>label</cite> is the target tensor
containing values -1 or 1 (0 or 1 if <cite>label_format</cite> is ‘binary’).
<cite>label</cite> and <cite>pred</cite> can have arbitrary shape as long as they have the same number of elements.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
<li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li>
<li><p><strong>label_format</strong> (<em>str</em><em>, </em><em>default 'signed'</em>) – Can be either ‘signed’ or ‘binary’. If the label_format is ‘signed’, all label values should
be either -1 or 1. If the label_format is ‘binary’, all label values should be either
0 or 1.</p></li>
<li><p><strong>Inputs</strong><ul>
<li><p><strong>pred</strong>: prediction tensor with arbitrary shape.</p></li>
<li><p><strong>label</strong>: truth tensor with values -1/1 (label_format is ‘signed’)
or 0/1 (label_format is ‘binary’). Must have the same size as pred.</p></li>
<li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).</p></li>
</ul>
</p></li>
<li><p><strong>Outputs</strong><ul>
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.</p></li>
</ul>
</p></li>
</ul>
</dd>
</dl>
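<p>For example, a minimal sketch (not part of the original docstring) of computing this loss on signed labels:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import loss as gloss

mx.npx.set_np()   # use the NumPy interface (MXNet 2.x)
loss_fn = gloss.LogisticLoss(label_format='signed')
pred  = mx.np.array([[0.5], [-2.0], [1.5]])
label = mx.np.array([[1.0], [-1.0], [-1.0]])
print(loss_fn(pred, label))   # one loss value per sample, shape (3,)
</pre></div>
</div>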
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing the Parameters of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and all of its
children (default), or, if <cite>select</cite> is given, only the Parameters whose names
match the given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’; this can be done
using regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will be named <cite>data</cite>. When there
is more than one input, they will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4-digit epoch number.
If None, do not export to file but return Python Symbol object and
corresponding dictionary of parameters.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
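<p>For example, a minimal sketch (assuming an initialized, hybridizable <cite>net</cite> as above):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.hybridize()
net(mx.np.ones((1, 4)))            # run one forward pass so the graph is traced and cached
net.export('my_model', epoch=7)    # writes my_model-symbol.json and my_model-0007.params
</pre></div>
</div>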
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#LogisticLoss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li>
<li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
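<p>For example, a sketch (<cite>net</cite> is an assumed HybridBlock):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.hybridize(static_alloc=True, static_shape=True)
out = net(mx.np.ones((8, 4)))   # the first call builds and caches the fused graph
</pre></div>
</div>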
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
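<p>For example, a minimal sketch using a named initializer:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.initialize(init=mx.init.Xavier(), device=mx.cpu(), force_reinit=True)
</pre></div>
</div>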
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API.</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture. It resets each Block’s
parameter UUIDs as they were when saved in order to match the names of the
saved parameters.</p>
<p>This function assumes the Blocks in the model were created in the same
order they were when the model was saved. This is because each Block is
uniquely identified by its class name and a unique ID assigned in creation
order (since children are stored in an OrderedDict), and that unique ID is
used to denote the specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not use this
set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if it had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from a dict.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter, if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
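<p>For example, a hypothetical sketch copying parameters between two blocks of identical structure:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># net_a and net_b are assumed initialized Blocks with identical parameter names
params = {name: p.data() for name, p in net_a.collect_params().items()}
net_b.load_dict(params, device=mx.cpu())
</pre></div>
</div>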
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter, if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
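<p>For example, a minimal round-trip sketch with <cite>save_parameters</cite> (file name hypothetical):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.save_parameters('net.params')
net.load_parameters('net.params', device=mx.cpu(),
                    cast_dtype=True, dtype_source='saved')
</pre></div>
</div>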
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. Can be used in place of hybridize;
afterwards, <cite>export</cite> can be called or inference can be run. See
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># partition and then export to file
block.optimize_for(x, backend='myPart')
block.export('partitioned')

# partition and then run inference
block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of the backend, as registered in <cite>SubgraphBackendRegistry</cite>. Default None.</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – clears any previous optimizations</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.forward" title="mxnet.gluon.loss.LogisticLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
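<p>For example, a sketch of a hook that logs output shapes; the returned handle can detach the hook later:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>def log_output(block, inputs, output):
    # runs after every forward(); must not modify its arguments
    print(type(block).__name__, output.shape)

handle = net.register_forward_hook(log_output)
net(mx.np.ones((2, 4)))
handle.detach()   # remove the hook when it is no longer needed
</pre></div>
</div>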
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.forward" title="mxnet.gluon.loss.LogisticLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Install op hook for block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters:
the name of the tensor being inspected (str),
the name of the operator producing or consuming that tensor (str),
and the tensor being inspected (NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
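<p>For example, a sketch of a monitoring callback with the 3-parameter signature described above:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>def monitor(tensor_name, op_name, tensor):
    print(tensor_name, op_name, tensor.shape)

net.hybridize()                                   # op hooks fire on hybridized blocks
net.register_op_hook(monitor, monitor_all=True)   # inspect inputs and outputs
</pre></div>
</div>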
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameter to given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
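<p>For example:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.reset_device(mx.gpu(0))   # move all Parameters to the first GPU
</pre></div>
</div>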
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.save" title="Permalink to this definition"></a></dt>
<dd><p>Save the model architecture and parameters to load again later.</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by Block class name and a unique ID.
We save each Block’s parameter UUID to restore later in order to match
the saved parameters.</p>
<p>Recursively traverses a Block’s children in order (since they are stored
in an OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not use this
set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph is saved (Symbol &amp; inputs) if
it has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
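<p>For example, a hypothetical sketch of a save/load round trip (<cite>MyNet</cite> is an assumed model class that builds its Blocks deterministically):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.save('checkpoint')    # writes checkpoint-model.json and checkpoint-model.params
net2 = MyNet()            # must create Blocks in the same order as the saved model
net2.load('checkpoint')
</pre></div>
</div>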
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save
model structures, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Set an attribute to a new value for all Parameters.</p>
<p>For example, set grad_req to null if you don’t need gradients w.r.t. a
model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<dl class="simple">
<dt>which is equivalent to</dt><dd><p>dense1.weight = dense0.weight
dense1.bias = dense0.bias</p>
</dd>
</dl>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Print the summary of the model’s output and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.LogisticLoss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffers to 0.</p>
</dd></dl>
</dd></dl>
<dl class="class">
<dt id="mxnet.gluon.loss.TripletLoss">
<em class="property">class </em><code class="sig-name descname">TripletLoss</code><span class="sig-paren">(</span><em class="sig-param">margin=1</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#TripletLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p>
<p>Calculates triplet loss given three input tensors and a positive margin.
Triplet loss measures the relative similarity between a positive
example, a negative example, and prediction:</p>
<div class="math notranslate nohighlight">
\[L = \sum_i \max(\Vert {pos}_i - {pred}_i \Vert_2^2 -
\Vert {neg}_i - {pred}_i \Vert_2^2 + {margin}, 0)\]</div>
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.apply" title="mxnet.gluon.loss.TripletLoss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.collect_params" title="mxnet.gluon.loss.TripletLoss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing the Parameters of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and all of its children (default), or only the Parameters whose names match the given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.export" title="mxnet.gluon.loss.TripletLoss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.forward" title="mxnet.gluon.loss.TripletLoss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(pred, positive, negative[, …])</p></td>
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.hybridize" title="mxnet.gluon.loss.TripletLoss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.infer_shape" title="mxnet.gluon.loss.TripletLoss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.infer_type" title="mxnet.gluon.loss.TripletLoss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.initialize" title="mxnet.gluon.loss.TripletLoss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.load" title="mxnet.gluon.loss.TripletLoss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.load_dict" title="mxnet.gluon.loss.TripletLoss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from a dict.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.load_parameters" title="mxnet.gluon.loss.TripletLoss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.optimize_for" title="mxnet.gluon.loss.TripletLoss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.register_forward_hook" title="mxnet.gluon.loss.TripletLoss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.register_forward_pre_hook" title="mxnet.gluon.loss.TripletLoss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.register_op_hook" title="mxnet.gluon.loss.TripletLoss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Install op hook for block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.reset_ctx" title="mxnet.gluon.loss.TripletLoss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.reset_device" title="mxnet.gluon.loss.TripletLoss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.save" title="mxnet.gluon.loss.TripletLoss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters to load again later</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.save_parameters" title="mxnet.gluon.loss.TripletLoss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.setattr" title="mxnet.gluon.loss.TripletLoss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.share_parameters" title="mxnet.gluon.loss.TripletLoss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.summary" title="mxnet.gluon.loss.TripletLoss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.zero_grad" title="mxnet.gluon.loss.TripletLoss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffer to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.params" title="mxnet.gluon.loss.TripletLoss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<p><cite>positive</cite>, <cite>negative</cite>, and <cite>pred</cite> can have arbitrary shape as long as they
have the same number of elements.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>margin</strong> (<em>float</em>) – Margin of separation between correct and incorrect pair.</p></li>
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
<li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li>
</ul>
</dd>
</dl>
<dl class="simple">
<dt>Inputs:</dt><dd><ul class="simple">
<li><p><strong>pred</strong>: prediction tensor with arbitrary shape</p></li>
<li><p><strong>positive</strong>: positive example tensor with arbitrary shape. Must have
the same size as pred.</p></li>
<li><p><strong>negative</strong>: negative example tensor with arbitrary shape. Must have
the same size as pred.</p></li>
</ul>
</dd>
<dt>Outputs:</dt><dd><ul class="simple">
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,).</p></li>
</ul>
</dd>
</dl>
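<p>As an illustration, a minimal usage sketch (assuming the NumPy-style <cite>mx.np</cite> interface is enabled; shapes and values are arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np, npx, gluon
npx.set_np()

# anchor (pred), positive and negative embeddings must have the same size
pred     = np.random.uniform(size=(4, 8))
positive = np.random.uniform(size=(4, 8))
negative = np.random.uniform(size=(4, 8))

loss_fn = gluon.loss.TripletLoss(margin=1.0)
loss = loss_fn(pred, positive, negative)  # shape (4,): one loss per sample
</pre></div>
</div>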
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its
children’s Parameters by default, or a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of only the Parameters
whose names match the given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’ using
regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will be named <cite>data</cite>. When there
are multiple inputs, they will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4-digit epoch number.
If None, do not export to file but return the Python Symbol object and the
corresponding dictionary of parameters.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
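<p>As a sketch of a typical workflow (assuming <cite>net</cite> is an initialized HybridBlock, <cite>x</cite> is a sample input, and the name <cite>my-model</cite> is arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.hybridize()
net(x)                           # one forward pass builds the cached graph
net.export('my-model', epoch=0)  # writes my-model-symbol.json and my-model-0000.params
</pre></div>
</div>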
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">pred</em>, <em class="sig-param">positive</em>, <em class="sig-param">negative</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#TripletLoss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li>
<li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape operator exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
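<p>For example, a common configuration for fixed-shape inference (assuming <cite>net</cite> is a HybridBlock):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># static_shape requires static_alloc=True; input shapes may still change, but slower
net.hybridize(static_alloc=True, static_shape=True)
</pre></div>
</div>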
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
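<p>For example, a minimal sketch (assuming <cite>net</cite> is a Block and a CPU device):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net.initialize(init=mx.init.Xavier(), device=mx.cpu(), force_reinit=True)
</pre></div>
</div>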
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture. It resets each Block’s
parameter UUIDs as they were when saved in order to match the names of the
saved parameters.</p>
<p>This function assumes the Blocks in the model were created in the same
order as when the model was saved. Each Block is uniquely identified by its
class name and a unique in-order ID (since the children form an
OrderedDict), and that ID is used to denote the specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not use this set
of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if it had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
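<p>A sketch of the save/load round trip (here <cite>build_net</cite> is a hypothetical function that recreates the model's Blocks in the same order every time):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.save('checkpoint')   # writes checkpoint-model.json and checkpoint-model.params
net2 = build_net()       # hypothetical: must recreate the model deterministically
net2.load('checkpoint')
</pre></div>
</div>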
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from dict</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter, if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter, if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
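<p>A minimal sketch of the round trip (assuming <cite>net</cite> is an initialized Block; the filename is arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.save_parameters('net.params')
net.load_parameters('net.params', cast_dtype=True, dtype_source='saved')
</pre></div>
</div>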
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. Can be used in place of hybridize;
afterwards, <cite>export</cite> can be called or inference can be run. See
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span># partition and then export to file
block.optimize_for(x, backend='myPart')
block.export('partitioned')

# partition and then run inference
block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of the backend, as registered in <cite>SubgraphBackendRegistry</cite>. Default is None.</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – clears any previous optimizations</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape operator exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.forward" title="mxnet.gluon.loss.TripletLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
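<p>For example, a sketch of a hook that prints output shapes (assuming <cite>net</cite> is a Block; the hook name is arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>def print_output_shape(block, inputs, output):
    # called after every forward pass of the block
    print(type(block).__name__, output.shape)

handle = net.register_forward_hook(print_output_shape)
# ... run forward passes ...
handle.detach()  # remove the hook when no longer needed
</pre></div>
</div>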
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.forward" title="mxnet.gluon.loss.TripletLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Install op hook for block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters: the name of the
tensor being inspected (str), the name of the operator producing or
consuming that tensor (str), and the tensor being inspected (NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
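<p>For example, a sketch of a callback matching that signature (assuming <cite>net</cite> is a hybridized Block; the callback name is arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>def monitor(tensor_name, op_name, tensor):
    # inspect each intermediate tensor of the hybridized graph
    print(tensor_name, op_name, tensor.shape)

net.register_op_hook(monitor, monitor_all=True)  # monitor inputs and outputs
</pre></div>
</div>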
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameter to given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
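<p>For example (assuming a GPU-enabled build):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net.reset_device(mx.gpu(0))  # move all Parameters to the first GPU
</pre></div>
</div>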
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.save" title="Permalink to this definition"></a></dt>
<dd><p>Save the model architecture and parameters to load again later</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by Block class name and a unique ID.
We save each Block’s parameter UUID to restore later in order to match
the saved parameters.</p>
<p>Recursively traverses a Block’s children in order (since the children form an
OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not use this set
of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph is saved (Symbol &amp; inputs) if
it has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save
model structures, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Set an attribute to a new value for all Parameters.</p>
<p>For example, set grad_req to null if you don’t need gradients w.r.t. a
model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<dl class="simple">
<dt>which equals to</dt><dd><p>dense1.weight = dense0.weight
dense1.bias = dense0.bias</p>
</dd>
</dl>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Print the summary of the model’s output and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.TripletLoss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffer to 0.</p>
</dd></dl>
</dd></dl>
<dl class="class">
<dt id="mxnet.gluon.loss.PoissonNLLLoss">
<em class="property">class </em><code class="sig-name descname">PoissonNLLLoss</code><span class="sig-paren">(</span><em class="sig-param">weight=None</em>, <em class="sig-param">from_logits=True</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">compute_full=False</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#PoissonNLLLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p>
<p>For a target (random variable) in a Poisson distribution, the function calculates the negative
log-likelihood loss.
PoissonNLLLoss measures the loss accrued from a Poisson regression prediction made by the model.</p>
<div class="math notranslate nohighlight">
\[L = \text{pred} - \text{target} * \log(\text{pred}) +\log(\text{target!})\]</div>
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.apply" title="mxnet.gluon.loss.PoissonNLLLoss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.collect_params" title="mxnet.gluon.loss.PoissonNLLLoss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its children’s Parameters by default, or a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of only the Parameters whose names match the given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.export" title="mxnet.gluon.loss.PoissonNLLLoss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.forward" title="mxnet.gluon.loss.PoissonNLLLoss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(pred, target[, sample_weight, epsilon])</p></td>
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.hybridize" title="mxnet.gluon.loss.PoissonNLLLoss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.infer_shape" title="mxnet.gluon.loss.PoissonNLLLoss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.infer_type" title="mxnet.gluon.loss.PoissonNLLLoss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.initialize" title="mxnet.gluon.loss.PoissonNLLLoss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.load" title="mxnet.gluon.loss.PoissonNLLLoss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.load_dict" title="mxnet.gluon.loss.PoissonNLLLoss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from dict</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.load_parameters" title="mxnet.gluon.loss.PoissonNLLLoss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.optimize_for" title="mxnet.gluon.loss.PoissonNLLLoss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.register_forward_hook" title="mxnet.gluon.loss.PoissonNLLLoss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.register_forward_pre_hook" title="mxnet.gluon.loss.PoissonNLLLoss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.register_op_hook" title="mxnet.gluon.loss.PoissonNLLLoss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Install op hook for block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.reset_ctx" title="mxnet.gluon.loss.PoissonNLLLoss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.reset_device" title="mxnet.gluon.loss.PoissonNLLLoss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.save" title="mxnet.gluon.loss.PoissonNLLLoss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters to load again later</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.save_parameters" title="mxnet.gluon.loss.PoissonNLLLoss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.setattr" title="mxnet.gluon.loss.PoissonNLLLoss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.share_parameters" title="mxnet.gluon.loss.PoissonNLLLoss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.summary" title="mxnet.gluon.loss.PoissonNLLLoss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.zero_grad" title="mxnet.gluon.loss.PoissonNLLLoss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffer to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.params" title="mxnet.gluon.loss.PoissonNLLLoss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<p><cite>target</cite> and <cite>pred</cite> can have arbitrary shape as long as they have the same number of elements.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>from_logits</strong> (<em>boolean</em><em>, </em><em>default True</em>) – Indicates whether the log of the predicted value has already been computed. If True, the loss is computed as
<span class="math notranslate nohighlight">\(\exp(\text{pred}) - \text{target} * \text{pred}\)</span>; if False, the loss is computed as
<span class="math notranslate nohighlight">\(\text{pred} - \text{target} * \log(\text{pred}+\text{epsilon})\)</span>.</p></li>
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
<li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li>
<li><p><strong>compute_full</strong> (<em>boolean</em><em>, </em><em>default False</em>) – Indicates whether to add an approximation (Stirling factor) for the factorial term in the formula for the loss.
The Stirling factor is:
<span class="math notranslate nohighlight">\(\text{target} * \log(\text{target}) - \text{target} + 0.5 * \log(2 * \pi * \text{target})\)</span></p></li>
<li><p><strong>epsilon</strong> (<em>float</em><em>, </em><em>default 1e-08</em>) – This is to avoid calculating log(0), which is not defined.</p></li>
</ul>
</dd>
</dl>
<dl class="simple">
<dt>Inputs:</dt><dd><ul class="simple">
<li><p><strong>pred</strong>: Predicted value</p></li>
<li><p><strong>target</strong>: Random variable (count or number) which belongs to a Poisson distribution.</p></li>
<li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).</p></li>
</ul>
</dd>
<dt>Outputs:</dt><dd><ul class="simple">
<li><p><strong>loss</strong>: Average loss (shape=(1,1)) of the loss tensor with shape (batch_size,).</p></li>
</ul>
</dd>
</dl>
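<p>As an illustration, a minimal usage sketch (assuming the NumPy-style <cite>mx.np</cite> interface is enabled; values are arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import np, npx, gluon
npx.set_np()

pred   = np.array([[1.0], [2.5], [0.5]])  # predicted rates
target = np.array([[2.0], [1.0], [3.0]])  # observed counts

loss_fn = gluon.loss.PoissonNLLLoss(from_logits=False)
loss = loss_fn(pred, target)              # averaged loss
</pre></div>
</div>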
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its
children’s Parameters by default, or a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of only the Parameters
whose names match the given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’ using
regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will be named <cite>data</cite>. When there
are multiple inputs, they will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4-digit epoch number.
If None, do not export to file but return the Python Symbol object and the
corresponding dictionary of parameters.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
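<p>A minimal sketch of a typical export flow (illustrative; <cite>net</cite> is a hypothetical,
already-initialized <cite>HybridBlock</cite>):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net.hybridize()
net(mx.np.ones((1, 10)))         # run one forward pass to build the cached graph
net.export('my_model', epoch=7)  # writes my_model-symbol.json and my_model-0007.params
</pre></div>
</div>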
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">pred</em>, <em class="sig-param">target</em>, <em class="sig-param">sample_weight=None</em>, <em class="sig-param">epsilon=1e-08</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#PoissonNLLLoss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li>
<li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li>
</ul>
</dd>
</dl>
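<p>For example, a short sketch of evaluating this loss (values are illustrative; with the
default <cite>from_logits=True</cite>, <cite>pred</cite> is interpreted as the log of the Poisson rate):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon.loss import PoissonNLLLoss

loss_fn = PoissonNLLLoss()
pred = mx.np.array([[1.0], [2.0]])    # predicted log-rates
target = mx.np.array([[2.0], [1.0]])  # observed counts
loss = loss_fn(pred, target)          # mean negative log likelihood over the batch
print(loss)
</pre></div>
</div>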
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic-shape operator exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
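<p>For instance, a common (illustrative) call that also enables the static optimizations
(<cite>net</cite> and <cite>x</cite> are hypothetical):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.hybridize(static_alloc=True, static_shape=True)  # static_shape requires static_alloc
out = net(x)  # the first call after hybridize() builds and caches the graph
</pre></div>
</div>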
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
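<p>For example, a sketch that (re-)initializes all parameters of a hypothetical Block
<cite>net</cite> with a Xavier initializer on CPU:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net.initialize(init=mx.init.Xavier(), device=mx.cpu(), force_reinit=True)
</pre></div>
</div>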
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API.</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture. It resets each Block’s
parameter UUIDs to what they were when saved, in order to match the
names of the saved parameters.</p>
<p>This function assumes the Blocks in the model were created in the same
order they were when the model was saved. This is because each Block is
uniquely identified by its class name and a unique ID in order (since
it’s an OrderedDict), and the unique ID is used to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model is not able to be recreated deterministically do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if it had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
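<p>A sketch of a save/load round trip (<cite>build_model</cite> is a hypothetical factory that
constructs the Blocks in the same deterministic order each time):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net = build_model()
net.save('checkpoint')    # writes checkpoint-model.json and checkpoint-model.params

net2 = build_model()      # identical architecture, identical Block creation order
net2.load('checkpoint')
</pre></div>
</div>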
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from dict</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype used for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
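<p>For example, a sketch that copies parameters between two hypothetical Blocks <cite>src</cite>
and <cite>dst</cite> of the same architecture, by way of a plain name-to-NDArray dict:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>param_dict = {name: p.data() for name, p in src.collect_params().items()}
dst.load_dict(param_dict)
</pre></div>
</div>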
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype used for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. Can be used in place of hybridize,
afterwards <cite>export</cite> can be called or inference can be run. See
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<p>Partition and then export to file:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>block.optimize_for(x, backend='myPart')
block.export('partitioned')
</pre></div>
</div>
<p>Partition and then run inference:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of the backend, as registered in <cite>SubgraphBackendRegistry</cite>; default None.</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – clears any previous optimizations</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic-shape operator exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.forward" title="mxnet.gluon.loss.PoissonNLLLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
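<p>An illustrative hook that logs output shapes (the hook, <cite>net</cite>, and <cite>x</cite> are
hypothetical, not part of the API):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>def log_output_shape(block, inputs, output):
    # called after every forward(); must not modify inputs or output
    print(type(block).__name__, getattr(output, 'shape', None))

handle = net.register_forward_hook(log_output_shape)
out = net(x)     # hook fires here
handle.detach()  # remove the hook when it is no longer needed
</pre></div>
</div>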
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.forward" title="mxnet.gluon.loss.PoissonNLLLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Installs an op hook on the block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters:
the name of the tensor being inspected (str),
the name of the operator producing or consuming that tensor (str),
and the tensor being inspected (NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
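<p>A sketch of a monitor callback matching the 3-parameter signature described above
(<cite>net</cite> is a hypothetical <cite>HybridBlock</cite>):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>def monitor(tensor_name, op_name, tensor):
    # inspect each intermediate output of the hybridized graph
    print(tensor_name, op_name, tensor.shape)

net.hybridize()
net.register_op_hook(monitor, monitor_all=True)  # monitor inputs and outputs
</pre></div>
</div>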
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameter to given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.save" title="Permalink to this definition"></a></dt>
<dd><p>Save the model architecture and parameters to load again later.</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by Block class name and a unique ID.
We save each Block’s parameter UUID to restore later in order to match
the saved parameters.</p>
<p>Recursively traverses a Block’s children in order (since it’s an
OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model is not able to be recreated deterministically do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph is saved (Symbol &amp; inputs) if
it has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save
model structures, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Set an attribute to a new value for all Parameters.</p>
<p>For example, set grad_req to null if you don’t need gradients w.r.t. a
model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<dl class="simple">
<dt>which is equivalent to</dt><dd><p>dense1.weight = dense0.weight
dense1.bias = dense0.bias</p>
</dd>
</dl>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Print the summary of the model’s output and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
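<p>For example (sketch; <cite>net</cite> is a hypothetical initialized, non-hybridized Block
taking a single 20-dimensional input):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net.summary(mx.np.ones((1, 20)))
</pre></div>
</div>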
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.PoissonNLLLoss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffer to 0.</p>
</dd></dl>
</dd></dl>
<dl class="class">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss">
<em class="property">class </em><code class="sig-name descname">CosineEmbeddingLoss</code><span class="sig-paren">(</span><em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">margin=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#CosineEmbeddingLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p>
<p>For a target label 1 or -1, vectors input1 and input2, the function computes the cosine distance
between the vectors. This can be interpreted as how similar/dissimilar two input vectors are.</p>
<div class="math notranslate nohighlight">
\[\begin{split}L = \sum_i \begin{cases} 1 - {cos\_sim({input1}_i, {input2}_i)} &amp; \text{ if } {label}_i = 1\\
{cos\_sim({input1}_i, {input2}_i)} &amp; \text{ if } {label}_i = -1 \end{cases}\\
cos\_sim({input1}_i, {input2}_i) = \frac{{input1}_i \cdot {input2}_i}{\lVert {input1}_i \rVert \, \lVert {input2}_i \rVert}\end{split}\]</div>
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.apply" title="mxnet.gluon.loss.CosineEmbeddingLoss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.collect_params" title="mxnet.gluon.loss.CosineEmbeddingLoss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its children’s Parameters (the default), or a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of only those Parameters whose names match the given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.export" title="mxnet.gluon.loss.CosineEmbeddingLoss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.forward" title="mxnet.gluon.loss.CosineEmbeddingLoss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(input1, input2, label[, sample_weight])</p></td>
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.hybridize" title="mxnet.gluon.loss.CosineEmbeddingLoss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.infer_shape" title="mxnet.gluon.loss.CosineEmbeddingLoss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.infer_type" title="mxnet.gluon.loss.CosineEmbeddingLoss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.initialize" title="mxnet.gluon.loss.CosineEmbeddingLoss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.load" title="mxnet.gluon.loss.CosineEmbeddingLoss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.load_dict" title="mxnet.gluon.loss.CosineEmbeddingLoss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from dict</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.load_parameters" title="mxnet.gluon.loss.CosineEmbeddingLoss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.optimize_for" title="mxnet.gluon.loss.CosineEmbeddingLoss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.register_forward_hook" title="mxnet.gluon.loss.CosineEmbeddingLoss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.register_forward_pre_hook" title="mxnet.gluon.loss.CosineEmbeddingLoss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.register_op_hook" title="mxnet.gluon.loss.CosineEmbeddingLoss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Installs an op hook on the block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.reset_ctx" title="mxnet.gluon.loss.CosineEmbeddingLoss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.reset_device" title="mxnet.gluon.loss.CosineEmbeddingLoss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.save" title="mxnet.gluon.loss.CosineEmbeddingLoss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters to load again later</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.save_parameters" title="mxnet.gluon.loss.CosineEmbeddingLoss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.setattr" title="mxnet.gluon.loss.CosineEmbeddingLoss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.share_parameters" title="mxnet.gluon.loss.CosineEmbeddingLoss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.summary" title="mxnet.gluon.loss.CosineEmbeddingLoss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.zero_grad" title="mxnet.gluon.loss.CosineEmbeddingLoss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffer to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.params" title="mxnet.gluon.loss.CosineEmbeddingLoss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<p><cite>input1</cite>, <cite>input2</cite> can have arbitrary shape as long as they have the same number of elements.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
<li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li>
<li><p><strong>margin</strong> (<em>float</em>) – Margin of separation between correct and incorrect pair.</p></li>
</ul>
</dd>
</dl>
<dl class="simple">
<dt>Inputs:</dt><dd><ul class="simple">
<li><p><strong>input1</strong>: a tensor with arbitrary shape</p></li>
<li><p><strong>input2</strong>: another tensor with the same shape as input1, to which input1 is
compared for similarity and loss calculation</p></li>
<li><p><strong>label</strong>: a 1-D tensor indicating, for each pair (input1, input2), whether the target label is 1 or -1</p></li>
<li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable
to the same shape as input1. For example, if input1 has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).</p></li>
</ul>
</dd>
<dt>Outputs:</dt><dd><ul class="simple">
<li><p><strong>loss</strong>: The loss tensor with shape (batch_size,).</p></li>
</ul>
</dd>
</dl>
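<p>A short usage sketch with two illustrative pairs, where the first pair should be
similar (label 1) and the second dissimilar (label -1):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon.loss import CosineEmbeddingLoss

loss_fn = CosineEmbeddingLoss()
input1 = mx.np.array([[1.0, 0.0], [1.0, 0.0]])
input2 = mx.np.array([[1.0, 0.0], [0.0, 1.0]])
label = mx.np.array([1, -1])
print(loss_fn(input1, input2, label))  # shape (2,); both entries are 0 here
</pre></div>
</div>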
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its
children’s Parameters (the default), or a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of only those
Parameters whose names match the given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’ by
using regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will be named <cite>data</cite>. When there
are multiple inputs, they will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4-digit epoch number.
If None, do not export to file but return Python Symbol object and
corresponding dictionary of parameters.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators, before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">input1</em>, <em class="sig-param">input2</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#CosineEmbeddingLoss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li>
<li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic-shape operator exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API.</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture. It resets each Block’s
parameter UUIDs to what they were when saved, in order to match the
names of the saved parameters.</p>
<p>This function assumes the Blocks in the model were created in the same
order they were when the model was saved. This is because each Block is
uniquely identified by its class name and a unique ID in order (since
it’s an OrderedDict), and the unique ID is used to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model is not able to be recreated deterministically do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if it had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from dict</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
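<p>A minimal sketch of copying parameters between two identically shaped blocks through a plain name-to-NDArray dict (assuming <cite>src</cite> has been initialized; the blocks and shapes are illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

src = nn.Dense(10)
src.initialize()
src(mx.np.ones((2, 4)))        # materialize src's parameters

# build a dict mapping parameter names to their NDArray values
param_dict = {name: p.data() for name, p in src.collect_params().items()}
dst = nn.Dense(10)
dst.load_dict(param_dict, device=mx.cpu())
</pre></div>
</div>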
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
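<p>A minimal sketch (assuming the same architecture is rebuilt before loading; the block, shapes, and filename are illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(10)
net.initialize()
net(mx.np.ones((2, 4)))
net.save_parameters('dense.params')

net2 = nn.Dense(10)                          # identical architecture
net2.load_parameters('dense.params', device=mx.cpu())
</pre></div>
</div>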
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. Can be used in place of hybridize;
afterwards <cite>export</cite> can be called or inference can be run. See
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<p>Partition and then export to file:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>block.optimize_for(x, backend='myPart')
block.export('partitioned')
</pre></div>
</div>
<p>Partition and then run inference:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of the backend, as registered in <cite>SubgraphBackendRegistry</cite>. Default None.</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – clears any previous optimizations</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.forward" title="mxnet.gluon.loss.CosineEmbeddingLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
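<p>A minimal sketch of a forward hook that only inspects shapes (the block and input are illustrative assumptions):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

def monitor(block, inputs, output):
    # inspect only; hooks should not modify the input or output
    print(type(block).__name__, output.shape)

net = nn.Dense(10)
net.initialize()
handle = net.register_forward_hook(monitor)
net(mx.np.ones((2, 4)))   # the hook fires after forward()
handle.detach()           # remove the hook when no longer needed
</pre></div>
</div>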
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.forward" title="mxnet.gluon.loss.CosineEmbeddingLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Install op hook for block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters: the name of the
tensor being inspected (str), the name of the operator producing or
consuming that tensor (str), and the tensor being inspected (NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
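<p>A minimal sketch (the block and input are illustrative; the hook inspects the hybridized graph’s operators):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

def callback(name, op_name, array):
    # name: tensor name, op_name: operator name, array: the NDArray value
    print(name, op_name, array.shape)

net = nn.Dense(10)
net.initialize()
net.hybridize()
net.register_op_hook(callback, monitor_all=True)
net(mx.np.ones((2, 4)))
</pre></div>
</div>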
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameter to given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
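<p>A minimal sketch of moving a block’s Parameters after initialization (assumes a GPU is available; the block is illustrative):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(10)
net.initialize(device=mx.cpu())
net.reset_device(mx.gpu(0))   # re-assign every Parameter to GPU 0
</pre></div>
</div>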
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.save" title="Permalink to this definition"></a></dt>
<dd><p>Save the model architecture and parameters to load again later.</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by Block class name and a unique ID.
We save each Block’s parameter UUID to restore later in order to match
the saved parameters.</p>
<p>Recursively traverses a Block’s children in order (since children are
stored in an OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph is saved (Symbol &amp; inputs) if
it has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save
model structures, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
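<p>A minimal sketch (the block, input shape, and filename are illustrative assumptions):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(10)
net.initialize()
net(mx.np.ones((2, 4)))       # a forward pass materializes the parameter shapes
net.save_parameters('dense.params', deduplicate=True)
</pre></div>
</div>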
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Set an attribute to a new value for all Parameters.</p>
<p>For example, set grad_req to null if you don’t need gradients w.r.t. a
model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<dl class="simple">
<dt>which equals to</dt><dd><p>dense1.weight = dense0.weight
dense1.bias = dense0.bias</p>
</dd>
</dl>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Print the summary of the model’s output and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
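<p>A minimal sketch (the block and input are illustrative; call <cite>summary</cite> before hybridizing):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(10)
net.initialize()
net.summary(mx.np.ones((2, 4)))   # prints per-layer output shapes and parameter counts
</pre></div>
</div>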
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.CosineEmbeddingLoss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffer to 0.</p>
</dd></dl>
</dd></dl>
<dl class="class">
<dt id="mxnet.gluon.loss.SDMLLoss">
<em class="property">class </em><code class="sig-name descname">SDMLLoss</code><span class="sig-paren">(</span><em class="sig-param">smoothing_parameter=0.3</em>, <em class="sig-param">weight=1.0</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#SDMLLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p>
<p>Calculates the batchwise Smoothed Deep Metric Learning (SDML) loss given two input tensors and a smoothing weight.
SDML loss learns similarity between paired samples by using unpaired samples in the minibatch
as potential negative examples.</p>
<p>The loss is described in greater detail in
Bonadiman, Daniele, Anjishnu Kumar, and Arpit Mittal.
“Large Scale Question Paraphrase Retrieval with Smoothed Deep Metric Learning.”
arXiv preprint arXiv:1905.12786 (2019).
URL: <a class="reference external" href="https://arxiv.org/pdf/1905.12786.pdf">https://arxiv.org/pdf/1905.12786.pdf</a></p>
<p>According to the authors, this loss formulation achieves accuracy comparable to or higher than
Triplet Loss but converges much faster.
The loss assumes that the items in both tensors in each minibatch
are aligned such that x1[0] corresponds to x2[0] and all other datapoints in the minibatch are unrelated.
<cite>x1</cite> and <cite>x2</cite> are minibatches of vectors.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>smoothing_parameter</strong> (<em>float</em>) – Probability mass to be distributed over the minibatch. Must be &lt; 1.0.</p></li>
<li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li>
<li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li>
<li><p><strong>Inputs</strong><ul>
<li><p><strong>x1</strong>: Minibatch of data points with shape (batch_size, vector_dim)</p></li>
<li><p><strong>x2</strong>: Minibatch of data points with shape (batch_size, vector_dim).
Each item in x2 is a positive sample for the same index in x1.
That is, x1[0] and x2[0] form a positive pair, x1[1] and x2[1] form a positive pair, and so on.
All data points in different rows should be decorrelated.</p></li>
</ul>
</p></li>
<li><p><strong>Outputs</strong><ul>
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,).</p></li>
</ul>
</p></li>
</ul>
</dd>
</dl>
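<p>A minimal usage sketch (batch size, vector dimension, and data are illustrative assumptions):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon.loss import SDMLLoss

sdml = SDMLLoss(smoothing_parameter=0.3)
x1 = mx.np.random.uniform(size=(8, 32))   # 8 vectors of dimension 32
x2 = mx.np.random.uniform(size=(8, 32))   # x2[i] is the positive sample for x1[i]
loss = sdml(x1, x2)                       # loss tensor of shape (8,)
</pre></div>
</div>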
<p><strong>Methods</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.apply" title="mxnet.gluon.loss.SDMLLoss.apply"><code class="xref py py-obj docutils literal notranslate"><span class="pre">apply</span></code></a>(fn)</p></td>
<td><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.collect_params" title="mxnet.gluon.loss.SDMLLoss.collect_params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">collect_params</span></code></a>([select])</p></td>
<td><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its children’s Parameters (default); it can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> of Parameters whose names match given regular expressions.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.export" title="mxnet.gluon.loss.SDMLLoss.export"><code class="xref py py-obj docutils literal notranslate"><span class="pre">export</span></code></a>(path[, epoch, remove_amp_cast])</p></td>
<td><p>Export HybridBlock to json format that can be loaded by <cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.forward" title="mxnet.gluon.loss.SDMLLoss.forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">forward</span></code></a>(x1, x2)</p></td>
<td><p>Computes the KL divergence between the negative distances and the identity matrix.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.hybridize" title="mxnet.gluon.loss.SDMLLoss.hybridize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybridize</span></code></a>([active, partition_if_dynamic, …])</p></td>
<td><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.infer_shape" title="mxnet.gluon.loss.SDMLLoss.infer_shape"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_shape</span></code></a>(*args)</p></td>
<td><p>Infers shape of Parameters from inputs.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.infer_type" title="mxnet.gluon.loss.SDMLLoss.infer_type"><code class="xref py py-obj docutils literal notranslate"><span class="pre">infer_type</span></code></a>(*args)</p></td>
<td><p>Infers data type of Parameters from inputs.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.initialize" title="mxnet.gluon.loss.SDMLLoss.initialize"><code class="xref py py-obj docutils literal notranslate"><span class="pre">initialize</span></code></a>([init, device, verbose, force_reinit])</p></td>
<td><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.load" title="mxnet.gluon.loss.SDMLLoss.load"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load</span></code></a>(prefix)</p></td>
<td><p>Load a model saved using the <cite>save</cite> API</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.load_dict" title="mxnet.gluon.loss.SDMLLoss.load_dict"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_dict</span></code></a>(param_dict[, device, …])</p></td>
<td><p>Load parameters from dict</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.load_parameters" title="mxnet.gluon.loss.SDMLLoss.load_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">load_parameters</span></code></a>(filename[, device, …])</p></td>
<td><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.optimize_for" title="mxnet.gluon.loss.SDMLLoss.optimize_for"><code class="xref py py-obj docutils literal notranslate"><span class="pre">optimize_for</span></code></a>(x, *args[, backend, clear, …])</p></td>
<td><p>Partitions the current HybridBlock and optimizes it for a given backend without executing a forward pass.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.register_forward_hook" title="mxnet.gluon.loss.SDMLLoss.register_forward_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward hook on the block.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.register_forward_pre_hook" title="mxnet.gluon.loss.SDMLLoss.register_forward_pre_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_forward_pre_hook</span></code></a>(hook)</p></td>
<td><p>Registers a forward pre-hook on the block.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.register_op_hook" title="mxnet.gluon.loss.SDMLLoss.register_op_hook"><code class="xref py py-obj docutils literal notranslate"><span class="pre">register_op_hook</span></code></a>(callback[, monitor_all])</p></td>
<td><p>Install op hook for block recursively.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.reset_ctx" title="mxnet.gluon.loss.SDMLLoss.reset_ctx"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_ctx</span></code></a>(ctx)</p></td>
<td><p>This function has been deprecated.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.reset_device" title="mxnet.gluon.loss.SDMLLoss.reset_device"><code class="xref py py-obj docutils literal notranslate"><span class="pre">reset_device</span></code></a>(device)</p></td>
<td><p>Re-assign all Parameters to other devices.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.save" title="mxnet.gluon.loss.SDMLLoss.save"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save</span></code></a>(prefix)</p></td>
<td><p>Save the model architecture and parameters to load again later</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.save_parameters" title="mxnet.gluon.loss.SDMLLoss.save_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">save_parameters</span></code></a>(filename[, deduplicate])</p></td>
<td><p>Save parameters to file.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.setattr" title="mxnet.gluon.loss.SDMLLoss.setattr"><code class="xref py py-obj docutils literal notranslate"><span class="pre">setattr</span></code></a>(name, value)</p></td>
<td><p>Set an attribute to a new value for all Parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.share_parameters" title="mxnet.gluon.loss.SDMLLoss.share_parameters"><code class="xref py py-obj docutils literal notranslate"><span class="pre">share_parameters</span></code></a>(shared)</p></td>
<td><p>Share parameters recursively inside the model.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.summary" title="mxnet.gluon.loss.SDMLLoss.summary"><code class="xref py py-obj docutils literal notranslate"><span class="pre">summary</span></code></a>(*inputs)</p></td>
<td><p>Print the summary of the model’s output and parameters.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.zero_grad" title="mxnet.gluon.loss.SDMLLoss.zero_grad"><code class="xref py py-obj docutils literal notranslate"><span class="pre">zero_grad</span></code></a>()</p></td>
<td><p>Sets all Parameters’ gradient buffer to 0.</p></td>
</tr>
</tbody>
</table>
<p><strong>Attributes</strong></p>
<table class="longtable docutils align-default">
<colgroup>
<col style="width: 10%" />
<col style="width: 90%" />
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.params" title="mxnet.gluon.loss.SDMLLoss.params"><code class="xref py py-obj docutils literal notranslate"><span class="pre">params</span></code></a></p></td>
<td><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its children’s parameters).</p></td>
</tr>
</tbody>
</table>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.apply">
<code class="sig-name descname">apply</code><span class="sig-paren">(</span><em class="sig-param">fn</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.apply" title="Permalink to this definition"></a></dt>
<dd><p>Applies <code class="docutils literal notranslate"><span class="pre">fn</span></code> recursively to every child block as well as self.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>fn</strong> (<em>callable</em>) – Function to be applied to each submodule, of form <cite>fn(block)</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.collect_params">
<code class="sig-name descname">collect_params</code><span class="sig-paren">(</span><em class="sig-param">select=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.collect_params" title="Permalink to this definition"></a></dt>
<dd><p>Returns a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code> containing this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s and all of its
children’s Parameters (default); it can also return a <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code>
of Parameters whose names match given regular expressions.</p>
<p>For example, collect the specified parameters in [‘conv1.weight’, ‘conv1.bias’, ‘fc.weight’,
‘fc.bias’]:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;conv1.weight|conv1.bias|fc.weight|fc.bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or collect all parameters whose names end with ‘weight’ or ‘bias’, this can be done
using regular expressions:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">collect_params</span><span class="p">(</span><span class="s1">&#39;.*weight|.*bias&#39;</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>select</strong> (<em>str</em>) – regular expressions</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>The selected <code class="xref py py-class docutils literal notranslate"><span class="pre">Dict</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.export">
<code class="sig-name descname">export</code><span class="sig-paren">(</span><em class="sig-param">path</em>, <em class="sig-param">epoch=0</em>, <em class="sig-param">remove_amp_cast=True</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.export" title="Permalink to this definition"></a></dt>
<dd><p>Export HybridBlock to json format that can be loaded by
<cite>gluon.SymbolBlock.imports</cite> or the C++ interface.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When there is only one input, it will be named <cite>data</cite>. When there
is more than one input, the inputs will be named <cite>data0</cite>, <cite>data1</cite>, etc.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>path</strong> (<em>str</em><em> or </em><em>None</em>) – Path to save model. Two files <cite>path-symbol.json</cite> and <cite>path-xxxx.params</cite>
will be created, where xxxx is the 4-digit epoch number.
If None, do not export to file but return Python Symbol object and
corresponding dictionary of parameters.</p></li>
<li><p><strong>epoch</strong> (<em>int</em>) – Epoch number of saved model.</p></li>
<li><p><strong>remove_amp_cast</strong> (<em>bool</em><em>, </em><em>optional</em>) – Whether to remove the amp_cast and amp_multicast operators before saving the model.</p></li>
</ul>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p><ul class="simple">
<li><p><strong>symbol_filename</strong> (<em>str</em>) – Filename to which model symbols were saved, including <cite>path</cite> prefix.</p></li>
<li><p><strong>params_filename</strong> (<em>str</em>) – Filename to which model parameters were saved, including <cite>path</cite> prefix.</p></li>
</ul>
</p>
</dd>
</dl>
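<p>A minimal sketch (the block, input, and path are illustrative; the block must be hybridized and run once before exporting):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(10)
net.initialize()
net.hybridize()
net(mx.np.ones((2, 4)))                    # first call builds the cached graph
sym_file, params_file = net.export('dense', epoch=0)
# sym_file == 'dense-symbol.json', params_file == 'dense-0000.params'
</pre></div>
</div>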
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.forward">
<code class="sig-name descname">forward</code><span class="sig-paren">(</span><em class="sig-param">x1</em>, <em class="sig-param">x2</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#SDMLLoss.forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.forward" title="Permalink to this definition"></a></dt>
<dd><p>Computes the KL divergence between the negative distances
(internally a softmax casts the distances into probabilities) and the
identity matrix.</p>
<p>This assumes that the two batches are aligned, so the most similar
vector should be the one with the same index.</p>
<table class="docutils align-default">
<thead>
<tr class="row-odd"><th class="head"><p>Batch1</p></th>
<th class="head"><p>Batch2</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>President of France</p></td>
<td><p>French President</p></td>
</tr>
<tr class="row-odd"><td><p>President of US</p></td>
<td><p>American President</p></td>
</tr>
</tbody>
</table>
<p>Given the question “President of France” in batch 1, the model will
learn to predict “French President” by comparing it with all the other
vectors in batch 2.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.hybridize">
<code class="sig-name descname">hybridize</code><span class="sig-paren">(</span><em class="sig-param">active=True</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.hybridize" title="Permalink to this definition"></a></dt>
<dd><p>Activates or deactivates <code class="xref py py-class docutils literal notranslate"><span class="pre">HybridBlock</span></code> s recursively. Has no effect on
non-hybrid children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>active</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to turn hybrid on or off.</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
</ul>
</dd>
</dl>
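<p>A minimal sketch (the block and input are illustrative; <cite>static_shape</cite> requires <cite>static_alloc</cite>):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(10)
net.initialize()
net.hybridize(static_alloc=True, static_shape=True)
out = net(mx.np.ones((2, 4)))   # first call triggers graph construction
</pre></div>
</div>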
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.infer_shape">
<code class="sig-name descname">infer_shape</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.infer_shape" title="Permalink to this definition"></a></dt>
<dd><p>Infers shape of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.infer_type">
<code class="sig-name descname">infer_type</code><span class="sig-paren">(</span><em class="sig-param">*args</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.infer_type" title="Permalink to this definition"></a></dt>
<dd><p>Infers data type of Parameters from inputs.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.initialize">
<code class="sig-name descname">initialize</code><span class="sig-paren">(</span><em class="sig-param">init=&lt;mxnet.initializer.Uniform object&gt;</em>, <em class="sig-param">device=None</em>, <em class="sig-param">verbose=False</em>, <em class="sig-param">force_reinit=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.initialize" title="Permalink to this definition"></a></dt>
<dd><p>Initializes <code class="xref py py-class docutils literal notranslate"><span class="pre">Parameter</span></code> s of this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code> and its children.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>init</strong> (<a class="reference internal" href="../../initializer/index.html#mxnet.initializer.Initializer" title="mxnet.initializer.Initializer"><em>Initializer</em></a>) – Global default Initializer to be used when <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.
Otherwise, <code class="xref py py-meth docutils literal notranslate"><span class="pre">Parameter.init()</span></code> takes precedence.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em>) – Keeps a copy of Parameters on one or many device(s).</p></li>
<li><p><strong>verbose</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to verbosely print out details on initialization.</p></li>
<li><p><strong>force_reinit</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to force re-initialization if parameter is already initialized.</p></li>
</ul>
</dd>
</dl>
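<p>A minimal sketch (the block and initializer choice are illustrative assumptions):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(10)
net.initialize(init=mx.init.Xavier(), device=mx.cpu(), verbose=True)
net.initialize(force_reinit=True)   # re-initialize already-initialized parameters
</pre></div>
</div>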
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.load">
<code class="sig-name descname">load</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.load" title="Permalink to this definition"></a></dt>
<dd><p>Load a model saved using the <cite>save</cite> API.</p>
<p>Reconfigures a model using the saved configuration. This function
does not regenerate the model architecture. It resets each Block’s
parameter UUIDs to their values at save time in order to match the names
of the saved parameters.</p>
<p>This function assumes the Blocks in the model were created in the same
order as when the model was saved. This is because each Block is
uniquely identified by its class name and an ordered unique ID (the
children are stored in an OrderedDict), and the unique ID is used to
denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not
use this set of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached_graph (Symbol &amp; inputs) and settings are
restored if the block had been hybridized before saving.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for loading this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.load_dict">
<code class="sig-name descname">load_dict</code><span class="sig-paren">(</span><em class="sig-param">param_dict</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.load_dict" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from dict</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>param_dict</strong> (<em>dict</em>) – Dictionary containing model parameters</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em>, </em><em>optional</em>) – Device context on which the memory is allocated. Default is
<cite>mxnet.device.current_device()</cite>.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this dict.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.load_parameters">
<code class="sig-name descname">load_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">device=None</em>, <em class="sig-param">allow_missing=False</em>, <em class="sig-param">ignore_extra=False</em>, <em class="sig-param">cast_dtype=False</em>, <em class="sig-param">dtype_source='current'</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.load_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Load parameters from file previously saved by <cite>save_parameters</cite>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to parameter file.</p></li>
<li><p><strong>device</strong> (<a class="reference internal" href="../../device/index.html#mxnet.device.Device" title="mxnet.device.Device"><em>Device</em></a><em> or </em><em>list of Device</em><em>, </em><em>default cpu</em><em>(</em><em>)</em>) – Device(s) to initialize loaded parameters on.</p></li>
<li><p><strong>allow_missing</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently skip loading parameters not represented in the file.</p></li>
<li><p><strong>ignore_extra</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether to silently ignore parameters from the file that are not
present in this Block.</p></li>
<li><p><strong>cast_dtype</strong> (<em>bool</em><em>, </em><em>default False</em>) – Cast the data type of the NDArray loaded from the checkpoint to the dtype
provided by the Parameter if any.</p></li>
<li><p><strong>dtype_source</strong> (<em>str</em><em>, </em><em>default 'current'</em>) – Must be in {‘current’, ‘saved’}.
Only valid if cast_dtype=True; specifies the source of the dtype for casting
the parameters.</p></li>
</ul>
</dd>
</dl>
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.optimize_for">
<code class="sig-name descname">optimize_for</code><span class="sig-paren">(</span><em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">backend=None</em>, <em class="sig-param">clear=False</em>, <em class="sig-param">partition_if_dynamic=True</em>, <em class="sig-param">static_alloc=False</em>, <em class="sig-param">static_shape=False</em>, <em class="sig-param">inline_limit=2</em>, <em class="sig-param">forward_bulk_size=None</em>, <em class="sig-param">backward_bulk_size=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.optimize_for" title="Permalink to this definition"></a></dt>
<dd><p>Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.</p>
<p>Immediately partitions a HybridBlock using the specified backend. Combines
the work done in the hybridize API with part of the work done in the forward
pass without calling the CachedOp. Can be used in place of hybridize;
afterwards <cite>export</cite> can be called or inference can be run. See
example/extensions/lib_subgraph/README.md for more details.</p>
<p class="rubric">Examples</p>
<p>Partition and then export to file:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>block.optimize_for(x, backend='myPart')
block.export('partitioned')
</pre></div>
</div>
<p>Partition and then run inference:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>block.optimize_for(x, backend='myPart')
block(x)
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>x</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – first input to model</p></li>
<li><p><strong>*args</strong> (<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – other inputs to model</p></li>
<li><p><strong>backend</strong> (<em>str</em>) – The name of the backend, as registered in <cite>SubgraphBackendRegistry</cite>. Default None.</p></li>
<li><p><strong>backend_opts</strong> (<em>dict of user-specified options to pass to the backend for partitioning</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
<li><p><strong>clear</strong> (<em>bool</em><em>, </em><em>default False</em>) – clears any previous optimizations</p></li>
<li><p><strong>partition_if_dynamic</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether to partition the graph when a dynamic shape op exists.</p></li>
<li><p><strong>static_alloc</strong> (<em>bool</em><em>, </em><em>default False</em>) – Statically allocate memory to improve speed. Memory usage may increase.</p></li>
<li><p><strong>static_shape</strong> (<em>bool</em><em>, </em><em>default False</em>) – Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.</p></li>
<li><p><strong>inline_limit</strong> (<em>optional int</em><em>, </em><em>default 2</em>) – Maximum number of operators that can be inlined.</p></li>
<li><p><strong>forward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during forward pass.</p></li>
<li><p><strong>backward_bulk_size</strong> (<em>optional int</em><em>, </em><em>default None</em>) – Segment size of bulk execution during backward pass.</p></li>
<li><p><strong>**kwargs</strong> (<em>The backend options</em><em>, </em><em>optional</em>) – Passed on to <cite>PrePartition</cite> and <cite>PostPartition</cite> functions of <cite>SubgraphProperty</cite></p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.params">
<em class="property">property </em><code class="sig-name descname">params</code><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.params" title="Permalink to this definition"></a></dt>
<dd><p>Returns this <code class="xref py py-class docutils literal notranslate"><span class="pre">Block</span></code>’s parameter dictionary (does not include its
children’s parameters).</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.register_forward_hook">
<code class="sig-name descname">register_forward_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.register_forward_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward hook on the block.</p>
<p>The hook function is called immediately after <a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.forward" title="mxnet.gluon.loss.SDMLLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input, output) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.register_forward_pre_hook">
<code class="sig-name descname">register_forward_pre_hook</code><span class="sig-paren">(</span><em class="sig-param">hook</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.register_forward_pre_hook" title="Permalink to this definition"></a></dt>
<dd><p>Registers a forward pre-hook on the block.</p>
<p>The hook function is called immediately before <a class="reference internal" href="#mxnet.gluon.loss.SDMLLoss.forward" title="mxnet.gluon.loss.SDMLLoss.forward"><code class="xref py py-func docutils literal notranslate"><span class="pre">forward()</span></code></a>.
It should not modify the input or output.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>hook</strong> (<em>callable</em>) – The forward hook function of form <cite>hook(block, input) -&gt; None</cite>.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.utils.HookHandle</span></code></p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.register_op_hook">
<code class="sig-name descname">register_op_hook</code><span class="sig-paren">(</span><em class="sig-param">callback</em>, <em class="sig-param">monitor_all=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.register_op_hook" title="Permalink to this definition"></a></dt>
<dd><p>Install op hook for block recursively.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>callback</strong> (<em>function</em>) – Function called to inspect the values of the intermediate outputs
of blocks after hybridization. It takes 3 parameters: the name of the
tensor being inspected (str), the name of the operator producing or
consuming that tensor (str), and the tensor being inspected (NDArray).</p></li>
<li><p><strong>monitor_all</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, monitor both input and output, otherwise monitor output only.</p></li>
</ul>
</dd>
</dl>
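<p>A minimal usage sketch (<cite>net</cite> stands for a hypothetical HybridBlock; the callback signature follows the parameter description above):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>def monitor(name, op_name, tensor):
    # name: tensor name (str); op_name: producing/consuming operator (str);
    # tensor: the NDArray being inspected.
    print(name, op_name, tensor.abs().max().asscalar())

net.register_op_hook(monitor, monitor_all=True)  # monitor inputs and outputs
net.hybridize()  # op hooks inspect intermediate values after hybridization
</pre></div>
</div>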
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.reset_ctx">
<code class="sig-name descname">reset_ctx</code><span class="sig-paren">(</span><em class="sig-param">ctx</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.reset_ctx" title="Permalink to this definition"></a></dt>
<dd><p>This function has been deprecated. Please refer to <code class="docutils literal notranslate"><span class="pre">HybridBlock.reset_device</span></code>.</p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.reset_device">
<code class="sig-name descname">reset_device</code><span class="sig-paren">(</span><em class="sig-param">device</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.reset_device" title="Permalink to this definition"></a></dt>
<dd><p>Re-assign all Parameters to other devices. If the Block is hybridized, it will reset the _cached_op_args.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>device</strong> (Device or list of Device, default <code class="xref py py-meth docutils literal notranslate"><span class="pre">device.current_device()</span></code>.) – Assign Parameter to given device. If device is a list of Device, a
copy will be made for each device.</p>
</dd>
</dl>
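<p>A minimal sketch, assuming a machine with two GPUs (<cite>net</cite> is a hypothetical initialized block):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx

net.reset_device(mx.gpu(0))               # move all Parameters to GPU 0
net.reset_device([mx.gpu(0), mx.gpu(1)])  # keep one copy per device
</pre></div>
</div>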
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.save">
<code class="sig-name descname">save</code><span class="sig-paren">(</span><em class="sig-param">prefix</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.save" title="Permalink to this definition"></a></dt>
<dd><p>Saves the model architecture and parameters so they can be loaded again later.</p>
<p>Saves the model architecture as a nested dictionary where each Block
in the model is a dictionary and its children are sub-dictionaries.</p>
<p>Each Block is uniquely identified by its class name and a unique ID.
We save each Block’s parameter UUIDs so that the saved parameters can be
matched up again on restore.</p>
<p>Recursively traverses a Block’s children in order (since they are stored
in an OrderedDict) and uses the unique ID to denote that specific Block.</p>
<p>Assumes that the model is created in an identical order every time.
If the model cannot be recreated deterministically, do not use this set
of APIs to save/load your model.</p>
<p>For HybridBlocks, the cached graph (Symbol and inputs) is saved if the
block has already been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>prefix</strong> (<em>str</em>) – The prefix to use in filenames for saving this model:
&lt;prefix&gt;-model.json and &lt;prefix&gt;-model.params</p>
</dd>
</dl>
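<p>A minimal sketch (<cite>net</cite> and <cite>build_net()</cite> are hypothetical names; the matching <cite>load()</cite> call must be made on an identically constructed model):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.save('checkpoint')  # writes checkpoint-model.json and checkpoint-model.params

net2 = build_net()      # recreate the exact same architecture deterministically
net2.load('checkpoint')
</pre></div>
</div>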
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.save_parameters">
<code class="sig-name descname">save_parameters</code><span class="sig-paren">(</span><em class="sig-param">filename</em>, <em class="sig-param">deduplicate=False</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.save_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Save parameters to file.</p>
<p>Saved parameters can only be loaded with <cite>load_parameters</cite>. Note that this
method only saves parameters, not model structure. If you want to save
model structures, please use <code class="xref py py-meth docutils literal notranslate"><span class="pre">HybridBlock.export()</span></code>.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>filename</strong> (<em>str</em>) – Path to file.</p></li>
<li><p><strong>deduplicate</strong> (<em>bool</em><em>, </em><em>default False</em>) – If True, save shared parameters only once. Otherwise, if a Block
contains multiple sub-blocks that share parameters, each of the
shared parameters will be separately saved for every sub-block.</p></li>
</ul>
</dd>
</dl>
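<p>A minimal round-trip sketch (<cite>net</cite> is a hypothetical block; only the parameter values travel through the file, not the architecture):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.save_parameters('net.params', deduplicate=True)  # shared weights saved once
# ... later, on an identically constructed model ...
net.load_parameters('net.params')
</pre></div>
</div>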
<p class="rubric">References</p>
<p><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></p>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.setattr">
<code class="sig-name descname">setattr</code><span class="sig-paren">(</span><em class="sig-param">name</em>, <em class="sig-param">value</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.setattr" title="Permalink to this definition"></a></dt>
<dd><p>Sets an attribute to a new value for all Parameters.</p>
<p>For example, set <cite>grad_req</cite> to <cite>'null'</cite> if you do not need gradients with
respect to a model’s Parameters:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;grad_req&#39;</span><span class="p">,</span> <span class="s1">&#39;null&#39;</span><span class="p">)</span>
</pre></div>
</div>
<p>or change the learning rate multiplier:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">model</span><span class="o">.</span><span class="n">setattr</span><span class="p">(</span><span class="s1">&#39;lr_mult&#39;</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">)</span>
</pre></div>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<em>str</em>) – Name of the attribute.</p></li>
<li><p><strong>value</strong> (<em>valid type for attribute name</em>) – The new value for the attribute.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.share_parameters">
<code class="sig-name descname">share_parameters</code><span class="sig-paren">(</span><em class="sig-param">shared</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.share_parameters" title="Permalink to this definition"></a></dt>
<dd><p>Share parameters recursively inside the model.</p>
<p>For example, if you want <code class="docutils literal notranslate"><span class="pre">dense1</span></code> to share <code class="docutils literal notranslate"><span class="pre">dense0</span></code>’s weights, you can do:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">dense0</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">20</span><span class="p">)</span>
<span class="n">dense1</span><span class="o">.</span><span class="n">share_parameters</span><span class="p">(</span><span class="n">dense0</span><span class="o">.</span><span class="n">collect_params</span><span class="p">())</span>
</pre></div>
</div>
<dl class="simple">
<dt>which equals to</dt><dd><p>dense1.weight = dense0.weight
dense1.bias = dense0.bias</p>
</dd>
</dl>
<p>Note that unlike the <cite>load_parameters</cite> or <cite>load_dict</cite> functions,
<cite>share_parameters</cite> results in the <cite>Parameter</cite> object being shared (or
tied) between the models, whereas <cite>load_parameters</cite> or <cite>load_dict</cite> only
set the value of the data dictionary of a model. If you call
<cite>load_parameters</cite> or <cite>load_dict</cite> after <cite>share_parameters</cite>, the loaded
value will be reflected in all networks that use the shared (or tied)
<cite>Parameter</cite> object.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>shared</strong> (<em>Dict</em>) – Dict of the shared parameters.</p>
</dd>
<dt class="field-even">Returns</dt>
<dd class="field-even"><p></p>
</dd>
<dt class="field-odd">Return type</dt>
<dd class="field-odd"><p>this block</p>
</dd>
</dl>
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.summary">
<code class="sig-name descname">summary</code><span class="sig-paren">(</span><em class="sig-param">*inputs</em><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.summary" title="Permalink to this definition"></a></dt>
<dd><p>Prints a summary of the model’s outputs and parameters.</p>
<p>The network must have been initialized, and must not have been hybridized.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>inputs</strong> (<em>object</em>) – Any input that the model supports. For any tensor in the input, only
<a class="reference internal" href="../../legacy/ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.ndarray.NDArray</span></code></a> is supported.</p>
</dd>
</dl>
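<p>A minimal sketch with a single <cite>Dense</cite> layer:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(10)   # any initialized, non-hybridized block
net.initialize()
net.summary(mx.nd.ones((4, 20)))  # prints per-layer output shapes and parameter counts
</pre></div>
</div>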
</dd></dl>
<dl class="method">
<dt id="mxnet.gluon.loss.SDMLLoss.zero_grad">
<code class="sig-name descname">zero_grad</code><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="headerlink" href="#mxnet.gluon.loss.SDMLLoss.zero_grad" title="Permalink to this definition"></a></dt>
<dd><p>Sets all Parameters’ gradient buffers to 0.</p>
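<p>This is mainly useful together with <cite>grad_req='add'</cite>, where gradients accumulate across backward passes and must be cleared manually. A minimal sketch (<cite>net</cite> is a hypothetical block):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>net.setattr('grad_req', 'add')  # accumulate gradients over several batches
# ... several forward/backward passes ...
net.zero_grad()                 # then reset the accumulated gradient buffers to 0
</pre></div>
</div>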
</dd></dl>
</dd></dl>
</div>
<hr class="feedback-hr-top" />
<div class="feedback-container">
<div class="feedback-question">Did this page help you?</div>
<div class="feedback-answer-container">
<div class="feedback-answer yes-link" data-response="yes">Yes</div>
<div class="feedback-answer no-link" data-response="no">No</div>
</div>
<div class="feedback-thank-you">Thanks for your feedback!</div>
</div>
<hr class="feedback-hr-bottom" />
</div>
<div class="side-doc-outline">
<div class="side-doc-outline--content">
</div>
</div>
<div class="clearer"></div>
</div><div class="pagenation">
<a id="button-prev" href="../data/vision/transforms/index.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="botton" accesskey="P">
<i class="pagenation-arrow-L fas fa-arrow-left fa-lg"></i>
<div class="pagenation-text">
<span class="pagenation-direction">Previous</span>
<div>vision.transforms</div>
</div>
</a>
<a id="button-next" href="../metric/index.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="botton" accesskey="N">
<i class="pagenation-arrow-R fas fa-arrow-right fa-lg"></i>
<div class="pagenation-text">
<span class="pagenation-direction">Next</span>
<div>gluon.metric</div>
</div>
</a>
</div>
<footer class="site-footer h-card">
<div class="wrapper">
<div class="row">
<div class="col-4">
<h4 class="footer-category-title">Resources</h4>
<ul class="contact-list">
<li><a href="https://lists.apache.org/list.html?dev@mxnet.apache.org">Mailing list</a> <a class="u-email" href="mailto:dev-subscribe@mxnet.apache.org">(subscribe)</a></li>
<li><a href="https://discuss.mxnet.io">MXNet Discuss forum</a></li>
<li><a href="https://github.com/apache/mxnet/issues">Github Issues</a></li>
<li><a href="https://github.com/apache/mxnet/projects">Projects</a></li>
<li><a href="https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+Home">Developer Wiki</a></li>
<li><a href="/community">Contribute To MXNet</a></li>
</ul>
</div>
<div class="col-4"><ul class="social-media-list"><li><a href="https://github.com/apache/mxnet"><svg class="svg-icon"><use xlink:href="../../../_static/minima-social-icons.svg#github"></use></svg> <span class="username">apache/mxnet</span></a></li><li><a href="https://www.twitter.com/apachemxnet"><svg class="svg-icon"><use xlink:href="../../../_static/minima-social-icons.svg#twitter"></use></svg> <span class="username">apachemxnet</span></a></li><li><a href="https://youtube.com/apachemxnet"><svg class="svg-icon"><use xlink:href="../../../_static/minima-social-icons.svg#youtube"></use></svg> <span class="username">apachemxnet</span></a></li></ul>
</div>
<div class="col-4 footer-text">
<p>A flexible and efficient library for deep learning.</p>
</div>
</div>
</div>
</footer>
<footer class="site-footer2">
<div class="wrapper">
<div class="row">
<div class="col-3">
<img src="../../../_static/apache_incubator_logo.png" class="footer-logo col-2">
</div>
<div class="footer-bottom-warning col-9">
<p>Apache MXNet is an effort undergoing incubation at <a href="http://www.apache.org/">The Apache Software Foundation</a> (ASF), <span style="font-weight:bold">sponsored by the <i>Apache Incubator</i></span>. Incubation is required
of all newly accepted projects until a further review indicates that the infrastructure,
communications, and decision making process have stabilized in a manner consistent with other
successful ASF projects. While incubation status is not necessarily a reflection of the completeness
or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.
</p><p>"Copyright © 2017-2018, The Apache Software Foundation Apache MXNet, MXNet, Apache, the Apache
feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the
Apache Software Foundation."</p>
</div>
</div>
</div>
</footer>
</body>
</html>