| <!DOCTYPE html> |
| |
| <html xmlns="http://www.w3.org/1999/xhtml"> |
| <head> |
<meta charset="utf-8" />
| <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> |
| <meta http-equiv="x-ua-compatible" content="ie=edge"> |
| <style> |
| .dropdown { |
| position: relative; |
| display: inline-block; |
| } |
| |
| .dropdown-content { |
| display: none; |
| position: absolute; |
| background-color: #f9f9f9; |
| min-width: 160px; |
| box-shadow: 0px 8px 16px 0px rgba(0,0,0,0.2); |
| padding: 12px 16px; |
| z-index: 1; |
| text-align: left; |
| } |
| |
| .dropdown:hover .dropdown-content { |
| display: block; |
| } |
| |
| .dropdown-option:hover { |
| color: #FF4500 !important; |
| } |
| |
| .dropdown-option-active { |
| color: #FF4500; |
| font-weight: lighter; |
| } |
| |
| .dropdown-option { |
| color: #000000; |
| font-weight: lighter; |
| } |
| |
| .dropdown-header { |
| color: #FFFFFF; |
| display: inline-flex; |
| } |
| |
| .dropdown-caret { |
| width: 18px; |
| } |
| |
| .dropdown-caret-path { |
| fill: #FFFFFF; |
| } |
| </style> |
| |
| <title>gluon.loss — Apache MXNet documentation</title> |
| |
| <link rel="stylesheet" href="../../../_static/basic.css" type="text/css" /> |
| <link rel="stylesheet" href="../../../_static/pygments.css" type="text/css" /> |
| <link rel="stylesheet" type="text/css" href="../../../_static/mxnet.css" /> |
| <link rel="stylesheet" href="../../../_static/material-design-lite-1.3.0/material.blue-deep_orange.min.css" type="text/css" /> |
| <link rel="stylesheet" href="../../../_static/sphinx_materialdesign_theme.css" type="text/css" /> |
| <link rel="stylesheet" href="../../../_static/fontawesome/all.css" type="text/css" /> |
| <link rel="stylesheet" href="../../../_static/fonts.css" type="text/css" /> |
| <script id="documentation_options" data-url_root="../../../" src="../../../_static/documentation_options.js"></script> |
| <script src="../../../_static/jquery.js"></script> |
| <script src="../../../_static/underscore.js"></script> |
| <script src="../../../_static/doctools.js"></script> |
| <script src="../../../_static/language_data.js"></script> |
| <script src="../../../_static/google_analytics.js"></script> |
| <script src="../../../_static/autodoc.js"></script> |
| <script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script> |
| <script async="async" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/latest.js?config=TeX-AMS-MML_HTMLorMML"></script> |
| <script type="text/x-mathjax-config">MathJax.Hub.Config({"tex2jax": {"inlineMath": [["$", "$"], ["\\(", "\\)"]], "processEscapes": true, "ignoreClass": "document", "processClass": "math|output_area"}})</script> |
| <link rel="shortcut icon" href="../../../_static/mxnet-icon.png"/> |
| <link rel="index" title="Index" href="../../../genindex.html" /> |
| <link rel="search" title="Search" href="../../../search.html" /> |
| <link rel="next" title="gluon.model_zoo.vision" href="../model_zoo/index.html" /> |
| <link rel="prev" title="vision.transforms" href="../data/vision/transforms/index.html" /> |
| </head> |
| <body><header class="site-header" role="banner"> |
| <div class="wrapper"> |
| <a class="site-title" rel="author" href="/versions/1.6.0/"><img |
| src="../../../_static/mxnet_logo.png" class="site-header-logo"></a> |
| <nav class="site-nav"> |
| <input type="checkbox" id="nav-trigger" class="nav-trigger"/> |
| <label for="nav-trigger"> |
| <span class="menu-icon"> |
| <svg viewBox="0 0 18 15" width="18px" height="15px"> |
| <path d="M18,1.484c0,0.82-0.665,1.484-1.484,1.484H1.484C0.665,2.969,0,2.304,0,1.484l0,0C0,0.665,0.665,0,1.484,0 h15.032C17.335,0,18,0.665,18,1.484L18,1.484z M18,7.516C18,8.335,17.335,9,16.516,9H1.484C0.665,9,0,8.335,0,7.516l0,0 c0-0.82,0.665-1.484,1.484-1.484h15.032C17.335,6.031,18,6.696,18,7.516L18,7.516z M18,13.516C18,14.335,17.335,15,16.516,15H1.484 C0.665,15,0,14.335,0,13.516l0,0c0-0.82,0.665-1.483,1.484-1.483h15.032C17.335,12.031,18,12.695,18,13.516L18,13.516z"/> |
| </svg> |
| </span> |
| </label> |
| |
| <div class="trigger"> |
| <a class="page-link" href="/versions/1.6.0/get_started">Get Started</a> |
| <a class="page-link" href="/versions/1.6.0/blog">Blog</a> |
| <a class="page-link" href="/versions/1.6.0/features">Features</a> |
| <a class="page-link" href="/versions/1.6.0/ecosystem">Ecosystem</a> |
| <a class="page-link page-current" href="/versions/1.6.0/api">Docs & Tutorials</a> |
| <a class="page-link" href="https://github.com/apache/incubator-mxnet">GitHub</a> |
| <div class="dropdown"> |
| <span class="dropdown-header">1.6.0 |
| <svg class="dropdown-caret" viewBox="0 0 32 32" class="icon icon-caret-bottom" aria-hidden="true"><path class="dropdown-caret-path" d="M24 11.305l-7.997 11.39L8 11.305z"></path></svg> |
| </span> |
| <div class="dropdown-content"> |
| <a class="dropdown-option" href="/">master</a><br> |
| <a class="dropdown-option" href="/versions/1.7.0/">1.7.0</a><br> |
| <a class="dropdown-option-active" href="/versions/1.6.0/">1.6.0</a><br> |
| <a class="dropdown-option" href="/versions/1.5.0/">1.5.0</a><br> |
| <a class="dropdown-option" href="/versions/1.4.1/">1.4.1</a><br> |
| <a class="dropdown-option" href="/versions/1.3.1/">1.3.1</a><br> |
| <a class="dropdown-option" href="/versions/1.2.1/">1.2.1</a><br> |
| <a class="dropdown-option" href="/versions/1.1.0/">1.1.0</a><br> |
| <a class="dropdown-option" href="/versions/1.0.0/">1.0.0</a><br> |
| <a class="dropdown-option" href="/versions/0.12.1/">0.12.1</a><br> |
| <a class="dropdown-option" href="/versions/0.11.0/">0.11.0</a> |
| </div> |
| </div> |
| </div> |
| </nav> |
| </div> |
| </header> |
| <div class="mdl-layout mdl-js-layout mdl-layout--fixed-header mdl-layout--fixed-drawer"><header class="mdl-layout__header mdl-layout__header--waterfall "> |
| <div class="mdl-layout__header-row"> |
| |
| <nav class="mdl-navigation breadcrumb"> |
| <a class="mdl-navigation__link" href="../../index.html">Python API</a><i class="material-icons">navigate_next</i> |
| <a class="mdl-navigation__link" href="../index.html">mxnet.gluon</a><i class="material-icons">navigate_next</i> |
| <a class="mdl-navigation__link is-active">gluon.loss</a> |
| </nav> |
| <div class="mdl-layout-spacer"></div> |
| <nav class="mdl-navigation"> |
| |
| <form class="form-inline pull-sm-right" action="../../../search.html" method="get"> |
| <div class="mdl-textfield mdl-js-textfield mdl-textfield--expandable mdl-textfield--floating-label mdl-textfield--align-right"> |
| <label id="quick-search-icon" class="mdl-button mdl-js-button mdl-button--icon" for="waterfall-exp"> |
| <i class="material-icons">search</i> |
| </label> |
| <div class="mdl-textfield__expandable-holder"> |
| <input class="mdl-textfield__input" type="text" name="q" id="waterfall-exp" placeholder="Search" /> |
| <input type="hidden" name="check_keywords" value="yes" /> |
| <input type="hidden" name="area" value="default" /> |
| </div> |
| </div> |
| <div class="mdl-tooltip" data-mdl-for="quick-search-icon"> |
| Quick search |
| </div> |
| </form> |
| |
| <a id="button-show-source" |
| class="mdl-button mdl-js-button mdl-button--icon" |
| href="../../../_sources/api/gluon/loss/index.rst" rel="nofollow"> |
| <i class="material-icons">code</i> |
| </a> |
| <div class="mdl-tooltip" data-mdl-for="button-show-source"> |
| Show Source |
| </div> |
| </nav> |
| </div> |
| <div class="mdl-layout__header-row header-links"> |
| <div class="mdl-layout-spacer"></div> |
| <nav class="mdl-navigation"> |
| </nav> |
| </div> |
| </header><header class="mdl-layout__drawer"> |
| |
| <div class="globaltoc"> |
| <span class="mdl-layout-title toc">Table Of Contents</span> |
| |
| |
| |
| <nav class="mdl-navigation"> |
| <ul class="current"> |
| <li class="toctree-l1"><a class="reference internal" href="../../../tutorials/index.html">Python Tutorials</a><ul> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/getting-started/index.html">Getting Started</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/index.html">Crash Course</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/1-ndarray.html">Manipulate data with <code class="docutils literal notranslate"><span class="pre">ndarray</span></code></a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/2-nn.html">Create a neural network</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/3-autograd.html">Automatic differentiation with <code class="docutils literal notranslate"><span class="pre">autograd</span></code></a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/4-train.html">Train the neural network</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/5-predict.html">Predict with a pre-trained model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/6-use_gpus.html">Use GPUs</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/index.html">Moving to MXNet from Other Frameworks</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/pytorch.html">PyTorch vs Apache MXNet</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/gluon_from_experiment_to_deployment.html">Gluon: from experiment to deployment</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/logistic_regression_explained.html">Logistic regression explained</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/image/mnist.html">MNIST</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/packages/index.html">Packages</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/autograd/index.html">Automatic Differentiation</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/gluon/index.html">Gluon</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/index.html">Blocks</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/custom-layer.html">Custom Layers</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/custom_layer_beginners.html">Customer Layers (Beginners)</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/hybridize.html">Hybridize</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/init.html">Initialization</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/naming.html">Parameter and Block Naming</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/nn.html">Layers and Blocks</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/parameters.html">Parameter Management</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/activations/activations.html">Activation Blocks</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/image/index.html">Image Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/image-augmentation.html">Image Augmentation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/mnist.html">Handwritten Digit Recognition</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/pretrained_models.html">Using pre-trained models in MXNet</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/index.html">Losses</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/custom-loss.html">Custom Loss Blocks</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/kl_divergence.html">Kullback-Leibler (KL) Divergence</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/loss.html">Loss functions</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/text/index.html">Text Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/gnmt.html">Google Neural Machine Translation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/transformer.html">Machine Translation with Transformer</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/training/index.html">Training</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/fit_api_tutorial.html">MXNet Gluon Fit API</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/trainer.html">Trainer</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/index.html">Learning Rates</a><ul> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_finder.html">Learning Rate Finder</a></li> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules.html">Learning Rate Schedules</a></li> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules_advanced.html">Advanced Learning Rate Schedules</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/normalization/index.html">Normalization Blocks</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/kvstore/index.html">KVStore</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/kvstore/kvstore.html">Distributed Key-Value Store</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/ndarray/index.html">NDArray</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/01-ndarray-intro.html">An Intro: Manipulate Data the MXNet Way with NDArray</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/02-ndarray-operations.html">NDArray Operations</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/03-ndarray-contexts.html">NDArray Contexts</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/gotchas_numpy_in_mxnet.html">Gotchas using NumPy in Apache MXNet</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/index.html">Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/csr.html">CSRNDArray - NDArray in Compressed Sparse Row Storage Format</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/row_sparse.html">RowSparseNDArray - NDArray for Sparse Gradient Updates</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/train.html">Train a Linear Regression Model with Sparse Symbols</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/train_gluon.html">Sparse NDArrays with Gluon</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/onnx/index.html">ONNX</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/fine_tuning_gluon.html">Fine-tuning an ONNX model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/inference_on_onnx_model.html">Running inference on MXNet/Gluon from an ONNX model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/super_resolution.html">Importing an ONNX model into MXNet</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/deploy/export/onnx.html">Export ONNX Models</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/optimizer/index.html">Optimizers</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/viz/index.html">Visualization</a><ul> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/visualize_graph">Visualize networks</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/performance/index.html">Performance</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/compression/index.html">Compression</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/compression/int8.html">Deploy with int-8</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/float16">Float16</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/gradient_compression">Gradient Compression</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/int8_inference.html">GluonCV with Quantized Models</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/backend/index.html">Accelerated Backend Tools</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/index.html">Intel MKL-DNN</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/mkldnn_quantization.html">Quantize with MKL-DNN backend</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/mkldnn_readme.html">Install MXNet with MKL-DNN</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/tensorrt/index.html">TensorRT</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/tensorrt/tensorrt.html">Optimized GPU Inference</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/tvm.html">Use TVM</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/profiler.html">Profiling MXNet Models</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/amp.html">Using AMP: Automatic Mixed Precision</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/deploy/index.html">Deployment</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/export/index.html">Export</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/export/onnx.html">Exporting to ONNX format</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/export_network.html">Export Gluon CV Models</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Save / Load Parameters</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/inference/index.html">Inference</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/cpp.html">Deploy into C++</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/scala.html">Deploy into a Java or Scala Environment</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/wine_detector.html">Real-time Object Detection with MXNet On The Raspberry Pi</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/index.html">Run on AWS</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_ec2.html">Run on an EC2 Instance</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_sagemaker.html">Run on Amazon SageMaker</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/cloud.html">MXNet on the Cloud</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/extend/index.html">Extend</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/extend/custom_layer.html">Custom Layers</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/extend/customop.html">Custom Numpy Operators</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/new_op">New Operator Creation</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/add_op_in_backend">New Operator in MXNet Backend</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l1 current"><a class="reference internal" href="../../index.html">Python API</a><ul class="current"> |
| <li class="toctree-l2"><a class="reference internal" href="../../ndarray/index.html">mxnet.ndarray</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/ndarray.html">ndarray</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/contrib/index.html">ndarray.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/image/index.html">ndarray.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/linalg/index.html">ndarray.linalg</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/op/index.html">ndarray.op</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/random/index.html">ndarray.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/register/index.html">ndarray.register</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/sparse/index.html">ndarray.sparse</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/utils/index.html">ndarray.utils</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2 current"><a class="reference internal" href="../index.html">mxnet.gluon</a><ul class="current"> |
| <li class="toctree-l3"><a class="reference internal" href="../block.html">gluon.Block</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../hybrid_block.html">gluon.HybridBlock</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../symbol_block.html">gluon.SymbolBlock</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../constant.html">gluon.Constant</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../parameter.html">gluon.Parameter</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../parameter_dict.html">gluon.ParameterDict</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../trainer.html">gluon.Trainer</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../contrib/index.html">gluon.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../data/index.html">gluon.data</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../data/vision/index.html">data.vision</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../data/vision/datasets/index.html">vision.datasets</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../data/vision/transforms/index.html">vision.transforms</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3 current"><a class="current reference internal" href="#">gluon.loss</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../model_zoo/index.html">gluon.model_zoo.vision</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../nn/index.html">gluon.nn</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../rnn/index.html">gluon.rnn</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../utils/index.html">gluon.utils</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../autograd/index.html">mxnet.autograd</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../initializer/index.html">mxnet.initializer</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../optimizer/index.html">mxnet.optimizer</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../lr_scheduler/index.html">mxnet.lr_scheduler</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../metric/index.html">mxnet.metric</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../kvstore/index.html">mxnet.kvstore</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../symbol/index.html">mxnet.symbol</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/symbol.html">symbol</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/contrib/index.html">symbol.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/image/index.html">symbol.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/linalg/index.html">symbol.linalg</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/op/index.html">symbol.op</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/random/index.html">symbol.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/register/index.html">symbol.register</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/sparse/index.html">symbol.sparse</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../module/index.html">mxnet.module</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../contrib/index.html">mxnet.contrib</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/autograd/index.html">contrib.autograd</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/io/index.html">contrib.io</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/ndarray/index.html">contrib.ndarray</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/onnx/index.html">contrib.onnx</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/quantization/index.html">contrib.quantization</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/symbol/index.html">contrib.symbol</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorboard/index.html">contrib.tensorboard</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorrt/index.html">contrib.tensorrt</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/text/index.html">contrib.text</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../mxnet/index.html">mxnet</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/attribute/index.html">mxnet.attribute</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/base/index.html">mxnet.base</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/callback/index.html">mxnet.callback</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/context/index.html">mxnet.context</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/engine/index.html">mxnet.engine</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/executor/index.html">mxnet.executor</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/executor_manager/index.html">mxnet.executor_manager</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/image/index.html">mxnet.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/io/index.html">mxnet.io</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/kvstore_server/index.html">mxnet.kvstore_server</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/libinfo/index.html">mxnet.libinfo</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/log/index.html">mxnet.log</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/model/index.html">mxnet.model</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/monitor/index.html">mxnet.monitor</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/name/index.html">mxnet.name</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/notebook/index.html">mxnet.notebook</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/operator/index.html">mxnet.operator</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/profiler/index.html">mxnet.profiler</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/random/index.html">mxnet.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/recordio/index.html">mxnet.recordio</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/registry/index.html">mxnet.registry</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/rtc/index.html">mxnet.rtc</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/test_utils/index.html">mxnet.test_utils</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/torch/index.html">mxnet.torch</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/util/index.html">mxnet.util</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/visualization/index.html">mxnet.visualization</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| </ul> |
| |
| </nav> |
| |
| </div> |
| |
| </header> |
| <main class="mdl-layout__content" tabIndex="0"> |
| |
| <script type="text/javascript" src="../../../_static/sphinx_materialdesign_theme.js "></script> |
| <header class="mdl-layout__drawer"> |
| |
| <div class="globaltoc"> |
| <span class="mdl-layout-title toc">Table Of Contents</span> |
| |
| |
| |
| <nav class="mdl-navigation"> |
| <ul class="current"> |
| <li class="toctree-l1"><a class="reference internal" href="../../../tutorials/index.html">Python Tutorials</a><ul> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/getting-started/index.html">Getting Started</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/index.html">Crash Course</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/1-ndarray.html">Manipulate data with <code class="docutils literal notranslate"><span class="pre">ndarray</span></code></a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/2-nn.html">Create a neural network</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/3-autograd.html">Automatic differentiation with <code class="docutils literal notranslate"><span class="pre">autograd</span></code></a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/4-train.html">Train the neural network</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/5-predict.html">Predict with a pre-trained model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/crash-course/6-use_gpus.html">Use GPUs</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/index.html">Moving to MXNet from Other Frameworks</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/getting-started/to-mxnet/pytorch.html">PyTorch vs Apache MXNet</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/gluon_from_experiment_to_deployment.html">Gluon: from experiment to deployment</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/getting-started/logistic_regression_explained.html">Logistic regression explained</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/image/mnist.html">MNIST</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/packages/index.html">Packages</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/autograd/index.html">Automatic Differentiation</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/gluon/index.html">Gluon</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/index.html">Blocks</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/custom-layer.html">Custom Layers</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/custom_layer_beginners.html">Customer Layers (Beginners)</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/hybridize.html">Hybridize</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/init.html">Initialization</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/naming.html">Parameter and Block Naming</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/nn.html">Layers and Blocks</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/parameters.html">Parameter Management</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/save_load_params.html">Saving and Loading Gluon Models</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/blocks/activations/activations.html">Activation Blocks</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/image/index.html">Image Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/image-augmentation.html">Image Augmentation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/mnist.html">Handwritten Digit Recognition</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/image/pretrained_models.html">Using pre-trained models in MXNet</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/index.html">Losses</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/custom-loss.html">Custom Loss Blocks</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/kl_divergence.html">Kullback-Leibler (KL) Divergence</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/loss/loss.html">Loss functions</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/text/index.html">Text Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/gnmt.html">Google Neural Machine Translation</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/text/transformer.html">Machine Translation with Transformer</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/gluon/training/index.html">Training</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/fit_api_tutorial.html">MXNet Gluon Fit API</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/trainer.html">Trainer</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/index.html">Learning Rates</a><ul> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_finder.html">Learning Rate Finder</a></li> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules.html">Learning Rate Schedules</a></li> |
| <li class="toctree-l6"><a class="reference internal" href="../../../tutorials/packages/gluon/training/learning_rates/learning_rate_schedules_advanced.html">Advanced Learning Rate Schedules</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/gluon/training/normalization/index.html">Normalization Blocks</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/kvstore/index.html">KVStore</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/kvstore/kvstore.html">Distributed Key-Value Store</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/ndarray/index.html">NDArray</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/01-ndarray-intro.html">An Intro: Manipulate Data the MXNet Way with NDArray</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/02-ndarray-operations.html">NDArray Operations</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/03-ndarray-contexts.html">NDArray Contexts</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/gotchas_numpy_in_mxnet.html">Gotchas using NumPy in Apache MXNet</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/index.html">Tutorials</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/csr.html">CSRNDArray - NDArray in Compressed Sparse Row Storage Format</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/row_sparse.html">RowSparseNDArray - NDArray for Sparse Gradient Updates</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/train.html">Train a Linear Regression Model with Sparse Symbols</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/packages/ndarray/sparse/train_gluon.html">Sparse NDArrays with Gluon</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/onnx/index.html">ONNX</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/fine_tuning_gluon.html">Fine-tuning an ONNX model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/inference_on_onnx_model.html">Running inference on MXNet/Gluon from an ONNX model</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/packages/onnx/super_resolution.html">Importing an ONNX model into MXNet</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/deploy/export/onnx.html">Export ONNX Models</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/optimizer/index.html">Optimizers</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/packages/viz/index.html">Visualization</a><ul> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/visualize_graph">Visualize networks</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/performance/index.html">Performance</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/compression/index.html">Compression</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/compression/int8.html">Deploy with int-8</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/float16">Float16</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/faq/gradient_compression">Gradient Compression</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/int8_inference.html">GluonCV with Quantized Models</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/performance/backend/index.html">Accelerated Backend Tools</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/index.html">Intel MKL-DNN</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/mkldnn_quantization.html">Quantize with MKL-DNN backend</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/mkldnn/mkldnn_readme.html">Install MXNet with MKL-DNN</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/tensorrt/index.html">TensorRT</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../../../tutorials/performance/backend/tensorrt/tensorrt.html">Optimized GPU Inference</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/tvm.html">Use TVM</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/profiler.html">Profiling MXNet Models</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/performance/backend/amp.html">Using AMP: Automatic Mixed Precision</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/deploy/index.html">Deployment</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/export/index.html">Export</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/export/onnx.html">Exporting to ONNX format</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://gluon-cv.mxnet.io/build/examples_deployment/export_network.html">Export Gluon CV Models</a></li> |
| <li class="toctree-l4"><a class="reference external" href="https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html">Save / Load Parameters</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/inference/index.html">Inference</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/cpp.html">Deploy into C++</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/scala.html">Deploy into a Java or Scala Environment</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/inference/wine_detector.html">Real-time Object Detection with MXNet On The Raspberry Pi</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/index.html">Run on AWS</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_ec2.html">Run on an EC2 Instance</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/use_sagemaker.html">Run on Amazon SageMaker</a></li> |
| <li class="toctree-l4"><a class="reference internal" href="../../../tutorials/deploy/run-on-aws/cloud.html">MXNet on the Cloud</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../../tutorials/extend/index.html">Extend</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/extend/custom_layer.html">Custom Layers</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../../tutorials/extend/customop.html">Custom Numpy Operators</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/new_op">New Operator Creation</a></li> |
| <li class="toctree-l3"><a class="reference external" href="https://mxnet.apache.org/api/faq/add_op_in_backend">New Operator in MXNet Backend</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l1 current"><a class="reference internal" href="../../index.html">Python API</a><ul class="current"> |
| <li class="toctree-l2"><a class="reference internal" href="../../ndarray/index.html">mxnet.ndarray</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/ndarray.html">ndarray</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/contrib/index.html">ndarray.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/image/index.html">ndarray.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/linalg/index.html">ndarray.linalg</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/op/index.html">ndarray.op</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/random/index.html">ndarray.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/register/index.html">ndarray.register</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/sparse/index.html">ndarray.sparse</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../ndarray/utils/index.html">ndarray.utils</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2 current"><a class="reference internal" href="../index.html">mxnet.gluon</a><ul class="current"> |
| <li class="toctree-l3"><a class="reference internal" href="../block.html">gluon.Block</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../hybrid_block.html">gluon.HybridBlock</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../symbol_block.html">gluon.SymbolBlock</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../constant.html">gluon.Constant</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../parameter.html">gluon.Parameter</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../parameter_dict.html">gluon.ParameterDict</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../trainer.html">gluon.Trainer</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../contrib/index.html">gluon.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../data/index.html">gluon.data</a><ul> |
| <li class="toctree-l4"><a class="reference internal" href="../data/vision/index.html">data.vision</a><ul> |
| <li class="toctree-l5"><a class="reference internal" href="../data/vision/datasets/index.html">vision.datasets</a></li> |
| <li class="toctree-l5"><a class="reference internal" href="../data/vision/transforms/index.html">vision.transforms</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| <li class="toctree-l3 current"><a class="current reference internal" href="#">gluon.loss</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../model_zoo/index.html">gluon.model_zoo.vision</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../nn/index.html">gluon.nn</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../rnn/index.html">gluon.rnn</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../utils/index.html">gluon.utils</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../autograd/index.html">mxnet.autograd</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../initializer/index.html">mxnet.initializer</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../optimizer/index.html">mxnet.optimizer</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../lr_scheduler/index.html">mxnet.lr_scheduler</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../metric/index.html">mxnet.metric</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../kvstore/index.html">mxnet.kvstore</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../symbol/index.html">mxnet.symbol</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/symbol.html">symbol</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/contrib/index.html">symbol.contrib</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/image/index.html">symbol.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/linalg/index.html">symbol.linalg</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/op/index.html">symbol.op</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/random/index.html">symbol.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/register/index.html">symbol.register</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../symbol/sparse/index.html">symbol.sparse</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../module/index.html">mxnet.module</a></li> |
| <li class="toctree-l2"><a class="reference internal" href="../../contrib/index.html">mxnet.contrib</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/autograd/index.html">contrib.autograd</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/io/index.html">contrib.io</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/ndarray/index.html">contrib.ndarray</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/onnx/index.html">contrib.onnx</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/quantization/index.html">contrib.quantization</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/symbol/index.html">contrib.symbol</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorboard/index.html">contrib.tensorboard</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/tensorrt/index.html">contrib.tensorrt</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../contrib/text/index.html">contrib.text</a></li> |
| </ul> |
| </li> |
| <li class="toctree-l2"><a class="reference internal" href="../../mxnet/index.html">mxnet</a><ul> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/attribute/index.html">mxnet.attribute</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/base/index.html">mxnet.base</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/callback/index.html">mxnet.callback</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/context/index.html">mxnet.context</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/engine/index.html">mxnet.engine</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/executor/index.html">mxnet.executor</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/executor_manager/index.html">mxnet.executor_manager</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/image/index.html">mxnet.image</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/io/index.html">mxnet.io</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/kvstore_server/index.html">mxnet.kvstore_server</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/libinfo/index.html">mxnet.libinfo</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/log/index.html">mxnet.log</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/model/index.html">mxnet.model</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/monitor/index.html">mxnet.monitor</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/name/index.html">mxnet.name</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/notebook/index.html">mxnet.notebook</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/operator/index.html">mxnet.operator</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/profiler/index.html">mxnet.profiler</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/random/index.html">mxnet.random</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/recordio/index.html">mxnet.recordio</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/registry/index.html">mxnet.registry</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/rtc/index.html">mxnet.rtc</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/test_utils/index.html">mxnet.test_utils</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/torch/index.html">mxnet.torch</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/util/index.html">mxnet.util</a></li> |
| <li class="toctree-l3"><a class="reference internal" href="../../mxnet/visualization/index.html">mxnet.visualization</a></li> |
| </ul> |
| </li> |
| </ul> |
| </li> |
| </ul> |
| |
| </nav> |
| |
| </div> |
| |
| </header> |
| |
| <div class="document"> |
| <div class="page-content" role="main"> |
| |
| <div class="section" id="gluon-loss"> |
| <h1>gluon.loss<a class="headerlink" href="#gluon-loss" title="Permalink to this headline">¶</a></h1> |
| <p>Gluon provides pre-defined loss functions in the <a class="reference internal" href="#module-mxnet.gluon.loss" title="mxnet.gluon.loss"><code class="xref py py-mod docutils literal notranslate"><span class="pre">mxnet.gluon.loss</span></code></a> |
| module.</p> |
| <span class="target" id="module-mxnet.gluon.loss"></span><p>losses for training neural networks</p> |
| <p><strong>Classes</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">Loss</span></code></a>(weight, batch_axis, **kwargs)</p></td> |
| <td><p>Base class for loss.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss" title="mxnet.gluon.loss.L2Loss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">L2Loss</span></code></a>([weight, batch_axis])</p></td> |
| <td><p>Calculates the mean squared error between <cite>label</cite> and <cite>pred</cite>.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss" title="mxnet.gluon.loss.L1Loss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">L1Loss</span></code></a>([weight, batch_axis])</p></td> |
| <td><p>Calculates the mean absolute error between <cite>label</cite> and <cite>pred</cite>.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SigmoidBinaryCrossEntropyLoss</span></code></a>([…])</p></td> |
| <td><p>The cross-entropy loss for binary classification.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBCELoss" title="mxnet.gluon.loss.SigmoidBCELoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SigmoidBCELoss</span></code></a></p></td> |
| <td><p>The cross-entropy loss for binary classification.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SoftmaxCrossEntropyLoss</span></code></a>([axis, …])</p></td> |
| <td><p>Computes the softmax cross entropy loss.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCELoss" title="mxnet.gluon.loss.SoftmaxCELoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SoftmaxCELoss</span></code></a></p></td> |
| <td><p>Computes the softmax cross entropy loss.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss" title="mxnet.gluon.loss.KLDivLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">KLDivLoss</span></code></a>([from_logits, axis, weight, …])</p></td> |
| <td><p>The Kullback-Leibler divergence loss.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss" title="mxnet.gluon.loss.CTCLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">CTCLoss</span></code></a>([layout, label_layout, weight])</p></td> |
| <td><p>Connectionist Temporal Classification Loss.</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss" title="mxnet.gluon.loss.HuberLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">HuberLoss</span></code></a>([rho, weight, batch_axis])</p></td> |
| <td><p>Calculates smoothed L1 loss that is equal to L1 loss if absolute error exceeds rho but is equal to L2 loss otherwise.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss" title="mxnet.gluon.loss.HingeLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">HingeLoss</span></code></a>([margin, weight, batch_axis])</p></td> |
| <td><p>Calculates the hinge loss function often used in SVMs:</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss" title="mxnet.gluon.loss.SquaredHingeLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">SquaredHingeLoss</span></code></a>([margin, weight, batch_axis])</p></td> |
| <td><p>Calculates the soft-margin loss function used in SVMs:</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss" title="mxnet.gluon.loss.LogisticLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">LogisticLoss</span></code></a>([weight, batch_axis, label_format])</p></td> |
| <td><p>Calculates the logistic loss (for binary losses only):</p></td> |
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss" title="mxnet.gluon.loss.TripletLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">TripletLoss</span></code></a>([margin, weight, batch_axis])</p></td> |
| <td><p>Calculates triplet loss given three input tensors and a positive margin.</p></td> |
| </tr> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss" title="mxnet.gluon.loss.PoissonNLLLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">PoissonNLLLoss</span></code></a>([weight, from_logits, …])</p></td> |
<td><p>For a target (random variable) drawn from a Poisson distribution, calculates the negative log-likelihood loss.</p></td>
| </tr> |
| <tr class="row-even"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss" title="mxnet.gluon.loss.CosineEmbeddingLoss"><code class="xref py py-obj docutils literal notranslate"><span class="pre">CosineEmbeddingLoss</span></code></a>([weight, batch_axis, margin])</p></td> |
<td><p>For a target label of 1 or -1 and vectors input1 and input2, computes the cosine distance between the vectors.</p></td>
| </tr> |
| </tbody> |
| </table> |
| <dl class="class"> |
| <dt id="mxnet.gluon.loss.Loss"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">Loss</code><span class="sig-paren">(</span><em class="sig-param">weight</em>, <em class="sig-param">batch_axis</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#Loss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.Loss" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.block.HybridBlock</span></code></p> |
| <p>Base class for loss.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.Loss.hybrid_forward" title="mxnet.gluon.loss.Loss.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, x, *args, **kwargs)</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
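<p>As a quick orientation, a minimal sketch of subclassing <cite>Loss</cite>. The <cite>CubicLoss</cite> class and its cubic penalty below are hypothetical and only illustrate the usual pattern of overriding <cite>hybrid_forward</cite> and averaging over every axis except <cite>batch_axis</cite>:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import nd
from mxnet.gluon.loss import Loss

class CubicLoss(Loss):
    """Hypothetical cubic-error loss; illustrates the subclassing pattern only."""
    def __init__(self, weight=None, batch_axis=0, **kwargs):
        super(CubicLoss, self).__init__(weight, batch_axis, **kwargs)
        self._axis = batch_axis

    def hybrid_forward(self, F, pred, label, sample_weight=None):
        # element-wise penalty, then average out every axis except the batch axis
        loss = F.abs(label - pred) ** 3
        return F.mean(loss, axis=self._axis, exclude=True)

print(CubicLoss()(nd.array([[1.0, 3.0]]), nd.array([[0.0, 3.0]])))  # one value per sample
</pre></div>
</div>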
| <dl class="method"> |
| <dt id="mxnet.gluon.loss.Loss.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">x</em>, <em class="sig-param">*args</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#Loss.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.Loss.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.loss.L2Loss"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">L2Loss</code><span class="sig-paren">(</span><em class="sig-param">weight=1.0</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#L2Loss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.L2Loss" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p> |
| <p>Calculates the mean squared error between <cite>label</cite> and <cite>pred</cite>.</p> |
| <div class="math notranslate nohighlight"> |
| \[L = \frac{1}{2} \sum_i \vert {label}_i - {pred}_i \vert^2.\]</div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L2Loss.hybrid_forward" title="mxnet.gluon.loss.L2Loss.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, pred, label[, sample_weight])</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <p><cite>label</cite> and <cite>pred</cite> can have arbitrary shape as long as they have the same |
| number of elements.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>pred</strong>: prediction tensor with arbitrary shape</p></li> |
| <li><p><strong>label</strong>: target tensor with the same size as pred.</p></li> |
| <li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable |
| to the same shape as pred. For example, if pred has shape (64, 10) |
| and you want to weigh each sample in the batch separately, |
| sample_weight should have shape (64, 1).</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
| batch_axis are averaged out.</p></li> |
| </ul> |
| </dd> |
| </dl> |
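<p>A minimal usage sketch with arbitrary example values; calling the block on <cite>pred</cite> and <cite>label</cite> NDArrays returns one loss value per sample:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import nd
from mxnet.gluon import loss as gloss

l2 = gloss.L2Loss()
pred  = nd.array([[1.0, 2.0], [3.0, 4.0]])
label = nd.array([[1.5, 2.0], [3.0, 3.0]])
print(l2(pred, label))   # shape (2,): one averaged loss value per sample
</pre></div>
</div>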
| <dl class="method"> |
| <dt id="mxnet.gluon.loss.L2Loss.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#L2Loss.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.L2Loss.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.loss.L1Loss"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">L1Loss</code><span class="sig-paren">(</span><em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#L1Loss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.L1Loss" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p> |
| <p>Calculates the mean absolute error between <cite>label</cite> and <cite>pred</cite>.</p> |
| <div class="math notranslate nohighlight"> |
| \[L = \sum_i \vert {label}_i - {pred}_i \vert.\]</div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.L1Loss.hybrid_forward" title="mxnet.gluon.loss.L1Loss.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, pred, label[, sample_weight])</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <p><cite>label</cite> and <cite>pred</cite> can have arbitrary shape as long as they have the same |
| number of elements.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>pred</strong>: prediction tensor with arbitrary shape</p></li> |
| <li><p><strong>label</strong>: target tensor with the same size as pred.</p></li> |
| <li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable |
| to the same shape as pred. For example, if pred has shape (64, 10) |
| and you want to weigh each sample in the batch separately, |
| sample_weight should have shape (64, 1).</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
| batch_axis are averaged out.</p></li> |
| </ul> |
| </dd> |
| </dl> |
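<p>A minimal usage sketch with arbitrary example values, including the optional per-sample <cite>sample_weight</cite>:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import nd
from mxnet.gluon import loss as gloss

l1 = gloss.L1Loss()
pred    = nd.array([[1.0, 2.0], [3.0, 4.0]])
label   = nd.array([[0.0, 2.0], [3.0, 6.0]])
weights = nd.array([[1.0], [0.5]])       # per-sample weighting, broadcast over features
print(l1(pred, label, weights))          # shape (2,)
</pre></div>
</div>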
| <dl class="method"> |
| <dt id="mxnet.gluon.loss.L1Loss.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#L1Loss.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.L1Loss.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">SigmoidBinaryCrossEntropyLoss</code><span class="sig-paren">(</span><em class="sig-param">from_sigmoid=False</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#SigmoidBinaryCrossEntropyLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p> |
| <p>The cross-entropy loss for binary classification. (alias: SigmoidBCELoss)</p> |
| <p>BCE loss is useful when training logistic regression. If <cite>from_sigmoid</cite> |
| is False (default), this loss computes:</p> |
| <div class="math notranslate nohighlight"> |
| \[ \begin{align}\begin{aligned}prob = \frac{1}{1 + \exp(-{pred})}\\L = - \sum_i {label}_i * \log({prob}_i) * pos\_weight + |
| (1 - {label}_i) * \log(1 - {prob}_i)\end{aligned}\end{align} \]</div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.hybrid_forward" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, pred, label[, …])</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <p>If <cite>from_sigmoid</cite> is True, this loss computes:</p> |
| <div class="math notranslate nohighlight"> |
| \[L = - \sum_i {label}_i * \log({pred}_i) * pos\_weight + |
| (1 - {label}_i) * \log(1 - {pred}_i)\]</div> |
<p>A tensor <cite>pos_weight &gt; 1</cite> decreases the false-negative count and hence increases
the recall.
Conversely, setting <cite>pos_weight &lt; 1</cite> decreases the false-positive count and
increases the precision.</p>
| <p><cite>pred</cite> and <cite>label</cite> can have arbitrary shape as long as they have the same |
| number of elements.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
<li><p><strong>from_sigmoid</strong> (bool, default is <cite>False</cite>) – Whether the input is the output of a sigmoid. Setting this to <cite>False</cite> makes
the loss compute the sigmoid and the BCE together, which is more numerically
stable via the log-sum-exp trick.</p></li>
| <li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>pred</strong>: prediction tensor with arbitrary shape</p></li> |
| <li><p><strong>label</strong>: target tensor with values in range <cite>[0, 1]</cite>. Must have the |
| same size as <cite>pred</cite>.</p></li> |
| <li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable |
| to the same shape as pred. For example, if pred has shape (64, 10) |
| and you want to weigh each sample in the batch separately, |
| sample_weight should have shape (64, 1).</p></li> |
| <li><p><strong>pos_weight</strong>: a weighting tensor of positive examples. Must be a vector with length |
equal to the number of classes. For example, if pred has shape (64, 10),
| pos_weight should have shape (1, 10).</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
| batch_axis are averaged out.</p></li> |
| </ul> |
| </dd> |
| </dl> |
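<p>A minimal usage sketch with arbitrary example values, assuming the default <cite>from_sigmoid=False</cite> so that raw logits are passed in:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import nd
from mxnet.gluon import loss as gloss

bce = gloss.SigmoidBinaryCrossEntropyLoss()   # from_sigmoid=False: pass raw logits
logits = nd.array([[0.5, -1.0, 2.0]])
label  = nd.array([[1.0, 0.0, 1.0]])
print(bce(logits, label))                     # shape (1,)
</pre></div>
</div>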
| <dl class="method"> |
| <dt id="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em>, <em class="sig-param">pos_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#SigmoidBinaryCrossEntropyLoss.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="attribute"> |
| <dt id="mxnet.gluon.loss.SigmoidBCELoss"> |
| <code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">SigmoidBCELoss</code><a class="headerlink" href="#mxnet.gluon.loss.SigmoidBCELoss" title="Permalink to this definition">¶</a></dt> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code>(F, pred, label[, …])</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <p><strong>Methods</strong></p> |
| <dd><p>alias of <a class="reference internal" href="#mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss" title="mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.SigmoidBinaryCrossEntropyLoss</span></code></a></p> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">SoftmaxCrossEntropyLoss</code><span class="sig-paren">(</span><em class="sig-param">axis=-1</em>, <em class="sig-param">sparse_label=True</em>, <em class="sig-param">from_logits=False</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#SoftmaxCrossEntropyLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p> |
| <p>Computes the softmax cross entropy loss. (alias: SoftmaxCELoss)</p> |
| <p>If <cite>sparse_label</cite> is <cite>True</cite> (default), label should contain integer |
| category indicators:</p> |
| <div class="math notranslate nohighlight"> |
\[ \begin{align}\begin{aligned}\DeclareMathOperator{\softmax}{softmax}\\p = \softmax({pred})\\L = -\sum_i \log p_{i,{label}_i}\end{aligned}\end{align} \]</div>
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.hybrid_forward" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, pred, label[, sample_weight])</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
<p><cite>label</cite>’s shape should be <cite>pred</cite>’s shape with the <cite>axis</cite> dimension removed,
i.e. for <cite>pred</cite> with shape (1,2,3,4) and <cite>axis = 2</cite>, <cite>label</cite>’s shape should
| be (1,2,4).</p> |
<p>If <cite>sparse_label</cite> is <cite>False</cite>, <cite>label</cite> should contain a probability distribution
and <cite>label</cite>’s shape should be the same as <cite>pred</cite>’s:</p>
| <div class="math notranslate nohighlight"> |
| \[ \begin{align}\begin{aligned}p = \softmax({pred})\\L = -\sum_i \sum_j {label}_j \log p_{ij}\end{aligned}\end{align} \]</div> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>axis</strong> (<em>int</em><em>, </em><em>default -1</em>) – The axis to sum over when computing softmax and entropy.</p></li> |
| <li><p><strong>sparse_label</strong> (<em>bool</em><em>, </em><em>default True</em>) – Whether label is an integer array instead of probability distribution.</p></li> |
<li><p><strong>from_logits</strong> (<em>bool</em><em>, </em><em>default False</em>) – Whether the input is a log probability (usually from log_softmax) instead
| of unnormalized numbers.</p></li> |
| <li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>pred</strong>: the prediction tensor, where the <cite>batch_axis</cite> dimension |
| ranges over batch size and <cite>axis</cite> dimension ranges over the number |
| of classes.</p></li> |
| <li><p><strong>label</strong>: the truth tensor. When <cite>sparse_label</cite> is True, <cite>label</cite>’s |
| shape should be <cite>pred</cite>’s shape with the <cite>axis</cite> dimension removed. |
| i.e. for <cite>pred</cite> with shape (1,2,3,4) and <cite>axis = 2</cite>, <cite>label</cite>’s shape |
| should be (1,2,4) and values should be integers between 0 and 2. If |
| <cite>sparse_label</cite> is False, <cite>label</cite>’s shape must be the same as <cite>pred</cite> |
| and values should be floats in the range <cite>[0, 1]</cite>.</p></li> |
| <li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable |
| to the same shape as label. For example, if label has shape (64, 10) |
| and you want to weigh each sample in the batch separately, |
| sample_weight should have shape (64, 1).</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
| batch_axis are averaged out.</p></li> |
| </ul> |
| </dd> |
| </dl> |
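<p>A minimal usage sketch with arbitrary example values, assuming the default <cite>sparse_label=True</cite> so that <cite>label</cite> holds integer class indices:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import nd
from mxnet.gluon import loss as gloss

ce = gloss.SoftmaxCrossEntropyLoss()                    # sparse_label=True by default
pred  = nd.array([[2.0, 0.5, 0.1], [0.2, 0.3, 3.0]])    # unnormalized scores over 3 classes
label = nd.array([0, 2])                                # integer class indices
print(ce(pred, label))                                  # shape (2,)
</pre></div>
</div>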
| <dl class="method"> |
| <dt id="mxnet.gluon.loss.SoftmaxCrossEntropyLoss.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#SoftmaxCrossEntropyLoss.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="attribute"> |
| <dt id="mxnet.gluon.loss.SoftmaxCELoss"> |
| <code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">SoftmaxCELoss</code><a class="headerlink" href="#mxnet.gluon.loss.SoftmaxCELoss" title="Permalink to this definition">¶</a></dt> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code>(F, pred, label[, sample_weight])</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <p><strong>Methods</strong></p> |
| <dd><p>alias of <a class="reference internal" href="#mxnet.gluon.loss.SoftmaxCrossEntropyLoss" title="mxnet.gluon.loss.SoftmaxCrossEntropyLoss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.SoftmaxCrossEntropyLoss</span></code></a></p> |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.loss.KLDivLoss"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">KLDivLoss</code><span class="sig-paren">(</span><em class="sig-param">from_logits=True</em>, <em class="sig-param">axis=-1</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#KLDivLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p> |
| <p>The Kullback-Leibler divergence loss.</p> |
<p>KL divergence measures the distance between probability distributions. It
| can be used to minimize information loss when approximating a distribution. |
| If <cite>from_logits</cite> is True (default), loss is defined as:</p> |
| <div class="math notranslate nohighlight"> |
| \[L = \sum_i {label}_i * \big[\log({label}_i) - {pred}_i\big]\]</div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.KLDivLoss.hybrid_forward" title="mxnet.gluon.loss.KLDivLoss.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, pred, label[, sample_weight])</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <p>If <cite>from_logits</cite> is False, loss is defined as:</p> |
| <div class="math notranslate nohighlight"> |
\[ \begin{align}\begin{aligned}\DeclareMathOperator{\softmax}{softmax}\\prob = \softmax({pred})\\L = \sum_i {label}_i * \big[\log({label}_i) - \log({prob}_i)\big]\end{aligned}\end{align} \]</div>
| <p><cite>label</cite> and <cite>pred</cite> can have arbitrary shape as long as they have the same |
| number of elements.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
<li><p><strong>from_logits</strong> (bool, default is <cite>True</cite>) – Whether the input is a log probability (usually from log_softmax) instead
| of unnormalized numbers.</p></li> |
<li><p><strong>axis</strong> (<em>int</em><em>, </em><em>default -1</em>) – The dimension along which to compute softmax. Only used when <cite>from_logits</cite>
| is False.</p></li> |
| <li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>pred</strong>: prediction tensor with arbitrary shape. If <cite>from_logits</cite> is |
| True, <cite>pred</cite> should be log probabilities. Otherwise, it should be |
| unnormalized predictions, i.e. from a dense layer.</p></li> |
| <li><p><strong>label</strong>: truth tensor with values in range <cite>(0, 1)</cite>. Must have |
| the same size as <cite>pred</cite>.</p></li> |
| <li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable |
| to the same shape as pred. For example, if pred has shape (64, 10) |
| and you want to weigh each sample in the batch separately, |
| sample_weight should have shape (64, 1).</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
| batch_axis are averaged out.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p class="rubric">References</p> |
| <p><a class="reference external" href="https://en.wikipedia.org/wiki/Kullback-Leibler_divergence">Kullback-Leibler divergence</a></p> |
| <dl class="method"> |
| <dt id="mxnet.gluon.loss.KLDivLoss.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#KLDivLoss.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.KLDivLoss.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.loss.CTCLoss"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">CTCLoss</code><span class="sig-paren">(</span><em class="sig-param">layout='NTC'</em>, <em class="sig-param">label_layout='NT'</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#CTCLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p> |
| <p>Connectionist Temporal Classification Loss.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
<li><p><strong>layout</strong> (<em>str</em><em>, </em><em>default 'NTC'</em>) – Layout of the prediction tensor. ‘N’, ‘T’, ‘C’ stand for batch size,
| sequence length, and alphabet_size respectively.</p></li> |
<li><p><strong>label_layout</strong> (<em>str</em><em>, </em><em>default 'NT'</em>) – Layout of the labels. ‘N’, ‘T’ stand for batch size and sequence
length, respectively.</p></li>
| <li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CTCLoss.hybrid_forward" title="mxnet.gluon.loss.CTCLoss.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, pred, label[, …])</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>pred</strong>: unnormalized prediction tensor (before softmax). |
| Its shape depends on <cite>layout</cite>. If <cite>layout</cite> is ‘TNC’, pred |
| should have shape <cite>(sequence_length, batch_size, alphabet_size)</cite>. |
| Note that in the last dimension, index <cite>alphabet_size-1</cite> is reserved |
| for internal use as blank label. So <cite>alphabet_size</cite> is one plus the |
| actual alphabet size.</p></li> |
| <li><p><strong>label</strong>: zero-based label tensor. Its shape depends on <cite>label_layout</cite>. |
| If <cite>label_layout</cite> is ‘TN’, <cite>label</cite> should have shape |
| <cite>(label_sequence_length, batch_size)</cite>.</p></li> |
| <li><p><strong>pred_lengths</strong>: optional (default None), used for specifying the |
| length of each entry when different <cite>pred</cite> entries in the same batch |
| have different lengths. <cite>pred_lengths</cite> should have shape <cite>(batch_size,)</cite>.</p></li> |
| <li><p><strong>label_lengths</strong>: optional (default None), used for specifying the |
| length of each entry when different <cite>label</cite> entries in the same batch |
| have different lengths. <cite>label_lengths</cite> should have shape <cite>(batch_size,)</cite>.</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>loss</strong>: output loss has shape <cite>(batch_size,)</cite>.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <p><strong>Example</strong>: suppose the vocabulary is <cite>[a, b, c]</cite>, and in one batch we |
| have three sequences ‘ba’, ‘cbb’, and ‘abac’. We can index the labels as |
| <cite>{‘a’: 0, ‘b’: 1, ‘c’: 2, blank: 3}</cite>. Then <cite>alphabet_size</cite> should be 4, |
| where label 3 is reserved for internal use by <cite>CTCLoss</cite>. We then need to |
| pad each sequence with <cite>-1</cite> to make a rectangular <cite>label</cite> tensor:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="p">[[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">],</span> |
| <span class="p">[</span><span class="mi">2</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">],</span> |
| <span class="p">[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="p">]]</span> |
| </pre></div> |
| </div> |
| <p class="rubric">References</p> |
| <p><a class="reference external" href="http://www.cs.toronto.edu/~graves/icml_2006.pdf">Connectionist Temporal Classification: Labelling Unsegmented |
| Sequence Data with Recurrent Neural Networks</a></p> |
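<p>A minimal usage sketch built on the example above; the prediction scores are arbitrary random values:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import nd
from mxnet.gluon import loss as gloss

ctc = gloss.CTCLoss()                          # layout='NTC', label_layout='NT'
# 3 sequences, 8 time steps, alphabet_size 4 (3 symbols + 1 reserved blank)
pred  = nd.random.uniform(shape=(3, 8, 4))     # arbitrary unnormalized scores
label = nd.array([[1, 0, -1, -1],
                  [2, 1, 1, -1],
                  [0, 1, 0, 2]])
print(ctc(pred, label))                        # shape (3,)
</pre></div>
</div>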
| <dl class="method"> |
| <dt id="mxnet.gluon.loss.CTCLoss.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">pred_lengths=None</em>, <em class="sig-param">label_lengths=None</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#CTCLoss.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.CTCLoss.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.loss.HuberLoss"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">HuberLoss</code><span class="sig-paren">(</span><em class="sig-param">rho=1</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#HuberLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p> |
| <p>Calculates smoothed L1 loss that is equal to L1 loss if absolute error |
| exceeds rho but is equal to L2 loss otherwise. Also called SmoothedL1 loss.</p> |
| <div class="math notranslate nohighlight"> |
| \[\begin{split}L = \sum_i \begin{cases} \frac{1}{2 {rho}} ({label}_i - {pred}_i)^2 & |
| \text{ if } |{label}_i - {pred}_i| < {rho} \\ |
| |{label}_i - {pred}_i| - \frac{{rho}}{2} & |
| \text{ otherwise } |
| \end{cases}\end{split}\]</div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HuberLoss.hybrid_forward" title="mxnet.gluon.loss.HuberLoss.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, pred, label[, sample_weight])</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <p><cite>label</cite> and <cite>pred</cite> can have arbitrary shape as long as they have the same |
| number of elements.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>rho</strong> (<em>float</em><em>, </em><em>default 1</em>) – Threshold for trimmed mean estimator.</p></li> |
| <li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>pred</strong>: prediction tensor with arbitrary shape</p></li> |
| <li><p><strong>label</strong>: target tensor with the same size as pred.</p></li> |
| <li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable |
| to the same shape as pred. For example, if pred has shape (64, 10) |
| and you want to weigh each sample in the batch separately, |
| sample_weight should have shape (64, 1).</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
| batch_axis are averaged out.</p></li> |
| </ul> |
| </dd> |
| </dl> |
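<p>A minimal usage sketch with arbitrary example values; the last residual exceeds <cite>rho</cite> and is therefore penalized linearly:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import nd
from mxnet.gluon import loss as gloss

huber = gloss.HuberLoss(rho=1)
pred  = nd.array([[0.0, 2.0, 10.0]])
label = nd.array([[0.4, 2.0, 4.0]])            # residuals: 0.4, 0.0, 6.0
print(huber(pred, label))                      # shape (1,)
</pre></div>
</div>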
| <dl class="method"> |
| <dt id="mxnet.gluon.loss.HuberLoss.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#HuberLoss.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.HuberLoss.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.loss.HingeLoss"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">HingeLoss</code><span class="sig-paren">(</span><em class="sig-param">margin=1</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#HingeLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p> |
| <p>Calculates the hinge loss function often used in SVMs:</p> |
| <div class="math notranslate nohighlight"> |
| \[L = \sum_i max(0, {margin} - {pred}_i \cdot {label}_i)\]</div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.HingeLoss.hybrid_forward" title="mxnet.gluon.loss.HingeLoss.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, pred, label[, sample_weight])</p></td> |
| <td><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p></td> |
| </tr> |
| </tbody> |
| </table> |
| <p>where <cite>pred</cite> is the classifier prediction and <cite>label</cite> is the target tensor |
| containing values -1 or 1. <cite>label</cite> and <cite>pred</cite> must have the same number of |
| elements.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
<li><p><strong>margin</strong> (<em>float</em>) – The margin in hinge loss. Defaults to 1.0.</p></li>
| <li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>pred</strong>: prediction tensor with arbitrary shape.</p></li> |
| <li><p><strong>label</strong>: truth tensor with values -1 or 1. Must have the same size |
| as pred.</p></li> |
| <li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable |
| to the same shape as pred. For example, if pred has shape (64, 10) |
| and you want to weigh each sample in the batch separately, |
| sample_weight should have shape (64, 1).</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
| batch_axis are averaged out.</p></li> |
| </ul> |
| </dd> |
| </dl> |
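<p>A minimal usage sketch with arbitrary example values; labels must be -1 or 1:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>from mxnet import nd
from mxnet.gluon import loss as gloss

hinge = gloss.HingeLoss(margin=1)
pred  = nd.array([[0.8], [-0.2], [2.5]])
label = nd.array([[1.0], [1.0], [-1.0]])       # labels must be -1 or 1
print(hinge(pred, label))                      # shape (3,)
</pre></div>
</div>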
| <dl class="method"> |
| <dt id="mxnet.gluon.loss.HingeLoss.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#HingeLoss.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.HingeLoss.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Overrides to construct symbolic graph for this <cite>Block</cite>.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.loss.SquaredHingeLoss"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">SquaredHingeLoss</code><span class="sig-paren">(</span><em class="sig-param">margin=1</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#SquaredHingeLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p> |
<p>Calculates the squared hinge (soft-margin) loss function used in SVMs:</p>
| <div class="math notranslate nohighlight"> |
| \[L = \sum_i max(0, {margin} - {pred}_i \cdot {label}_i)^2\]</div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.SquaredHingeLoss.hybrid_forward" title="mxnet.gluon.loss.SquaredHingeLoss.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, pred, label[, sample_weight])</p></td> |
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
| </tr> |
| </tbody> |
| </table> |
| <p>where <cite>pred</cite> is the classifier prediction and <cite>label</cite> is the target tensor |
| containing values -1 or 1. <cite>label</cite> and <cite>pred</cite> can have arbitrary shape as |
| long as they have the same number of elements.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>margin</strong> (<em>float</em>) – The margin in hinge loss. Defaults to 1.0</p></li> |
| <li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>pred</strong>: prediction tensor with arbitrary shape</p></li> |
| <li><p><strong>label</strong>: truth tensor with values -1 or 1. Must have the same size |
| as pred.</p></li> |
| <li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable |
| to the same shape as pred. For example, if pred has shape (64, 10) |
| and you want to weigh each sample in the batch separately, |
| sample_weight should have shape (64, 1).</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.</p></li>
| </ul> |
| </dd> |
| </dl> |
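<p>A minimal usage sketch (illustrative only; values are arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>from mxnet import nd, gluon

# Same interface as HingeLoss, but each hinge term is squared.
loss_fn = gluon.loss.SquaredHingeLoss(margin=1.0)
pred  = nd.array([[0.9], [-0.3], [0.2]])
label = nd.array([[1], [-1], [-1]])
print(loss_fn(pred, label))   # per-sample squared hinge loss, shape (3,)
</pre></div></div>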
| <dl class="method"> |
| <dt id="mxnet.gluon.loss.SquaredHingeLoss.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#SquaredHingeLoss.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.SquaredHingeLoss.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.loss.LogisticLoss"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">LogisticLoss</code><span class="sig-paren">(</span><em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">label_format='signed'</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#LogisticLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p> |
<p>Calculates the logistic loss (for binary classification only):</p>
| <div class="math notranslate nohighlight"> |
| \[L = \sum_i \log(1 + \exp(- {pred}_i \cdot {label}_i))\]</div> |
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.LogisticLoss.hybrid_forward" title="mxnet.gluon.loss.LogisticLoss.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, pred, label[, sample_weight])</p></td> |
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
| </tr> |
| </tbody> |
| </table> |
| <p>where <cite>pred</cite> is the classifier prediction and <cite>label</cite> is the target tensor |
| containing values -1 or 1 (0 or 1 if <cite>label_format</cite> is binary). |
| <cite>label</cite> and <cite>pred</cite> can have arbitrary shape as long as they have the same number of elements.</p> |
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li> |
| <li><p><strong>label_format</strong> (<em>str</em><em>, </em><em>default 'signed'</em>) – Can be either ‘signed’ or ‘binary’. If the label_format is ‘signed’, all label values should |
| be either -1 or 1. If the label_format is ‘binary’, all label values should be either |
| 0 or 1.</p></li> |
</ul>
</dd>
</dl>
<dl class="simple">
<dt>Inputs:</dt><dd><ul class="simple">
<li><p><strong>pred</strong>: prediction tensor with arbitrary shape.</p></li>
<li><p><strong>label</strong>: truth tensor with values -1/1 (label_format is ‘signed’)
or 0/1 (label_format is ‘binary’). Must have the same size as pred.</p></li>
<li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable
to the same shape as pred. For example, if pred has shape (64, 10)
and you want to weigh each sample in the batch separately,
sample_weight should have shape (64, 1).</p></li>
</ul>
</dd>
<dt>Outputs:</dt><dd><ul class="simple">
<li><p><strong>loss</strong>: loss tensor with shape (batch_size,). Dimensions other than
batch_axis are averaged out.</p></li>
</ul>
</dd>
| </dl> |
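<p>A minimal usage sketch (illustrative only; values are arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>from mxnet import nd, gluon

# With label_format='signed' (the default), labels are -1 or 1.
# Use label_format='binary' if your labels are 0 or 1 instead.
loss_fn = gluon.loss.LogisticLoss(label_format='signed')
pred  = nd.array([[2.0], [-1.5]])
label = nd.array([[1], [-1]])
print(loss_fn(pred, label))   # log(1 + exp(-pred * label)) per sample, shape (2,)
</pre></div></div>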
| <dl class="method"> |
| <dt id="mxnet.gluon.loss.LogisticLoss.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">pred</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#LogisticLoss.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.LogisticLoss.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.loss.TripletLoss"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">TripletLoss</code><span class="sig-paren">(</span><em class="sig-param">margin=1</em>, <em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#TripletLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p> |
| <p>Calculates triplet loss given three input tensors and a positive margin. |
| Triplet loss measures the relative similarity between a positive |
| example, a negative example, and prediction:</p> |
| <div class="math notranslate nohighlight"> |
\[L = \sum_i \max(\Vert {pos}_i - {pred}_i \Vert_2^2 -
\Vert {neg}_i - {pred}_i \Vert_2^2 + {margin}, 0)\]</div>
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.TripletLoss.hybrid_forward" title="mxnet.gluon.loss.TripletLoss.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, pred, positive, negative)</p></td> |
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
| </tr> |
| </tbody> |
| </table> |
<p><cite>positive</cite>, <cite>negative</cite>, and <cite>pred</cite> can have arbitrary shape as long as they
have the same number of elements.</p>
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>margin</strong> (<em>float</em>) – Margin of separation between correct and incorrect pair.</p></li> |
| <li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>pred</strong>: prediction tensor with arbitrary shape</p></li> |
<li><p><strong>positive</strong>: positive example tensor with arbitrary shape. Must have
the same size as pred.</p></li>
<li><p><strong>negative</strong>: negative example tensor with arbitrary shape. Must have
the same size as pred.</p></li>
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>loss</strong>: loss tensor with shape (batch_size,).</p></li> |
| </ul> |
| </dd> |
| </dl> |
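<p>A minimal usage sketch (illustrative only; the embedding size of 16 is arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>from mxnet import nd, gluon

# pred is the anchor embedding; positive and negative must have the same size.
loss_fn  = gluon.loss.TripletLoss(margin=1.0)
pred     = nd.random.normal(shape=(4, 16))
positive = nd.random.normal(shape=(4, 16))
negative = nd.random.normal(shape=(4, 16))
print(loss_fn(pred, positive, negative))   # per-sample loss, shape (4,)
</pre></div></div>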
| <dl class="method"> |
| <dt id="mxnet.gluon.loss.TripletLoss.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">pred</em>, <em class="sig-param">positive</em>, <em class="sig-param">negative</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#TripletLoss.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.TripletLoss.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.loss.PoissonNLLLoss"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">PoissonNLLLoss</code><span class="sig-paren">(</span><em class="sig-param">weight=None</em>, <em class="sig-param">from_logits=True</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">compute_full=False</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#PoissonNLLLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p> |
| <p>For a target (Random Variable) in a Poisson distribution, the function calculates the Negative |
| Log likelihood loss. |
| PoissonNLLLoss measures the loss accrued from a poisson regression prediction made by the model.</p> |
| <div class="math notranslate nohighlight"> |
\[L = \text{pred} - \text{target} \cdot \log(\text{pred}) + \log(\text{target}!)\]</div>
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.PoissonNLLLoss.hybrid_forward" title="mxnet.gluon.loss.PoissonNLLLoss.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, pred, target[, …])</p></td> |
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
| </tr> |
| </tbody> |
| </table> |
<p><cite>target</cite> and <cite>pred</cite> can have arbitrary shape as long as they have the same number of elements.</p>
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
<li><p><strong>from_logits</strong> (<em>boolean</em><em>, </em><em>default True</em>) – Indicates whether the log of the predicted value has already been computed. If True, the loss is computed as
<span class="math notranslate nohighlight">\(\exp(\text{pred}) - \text{target} * \text{pred}\)</span>; if False, the loss is computed as
<span class="math notranslate nohighlight">\(\text{pred} - \text{target} * \log(\text{pred}+\text{epsilon})\)</span>.</p></li>
| <li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li> |
<li><p><strong>compute_full</strong> (<em>boolean</em><em>, </em><em>default False</em>) – Indicates whether to add an approximation (Stirling factor) for the factorial term in the formula for the loss.
The Stirling factor is:
<span class="math notranslate nohighlight">\(\text{target} * \log(\text{target}) - \text{target} + 0.5 * \log(2 * \pi * \text{target})\)</span></p></li>
<li><p><strong>epsilon</strong> (<em>float</em><em>, </em><em>default 1e-08</em>) – Used to avoid calculating log(0), which is not defined.</p></li>
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
| <li><p><strong>pred</strong>: Predicted value</p></li> |
<li><p><strong>target</strong>: Random variable (count or number) which belongs to a Poisson distribution.</p></li>
| <li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable |
| to the same shape as pred. For example, if pred has shape (64, 10) |
| and you want to weigh each sample in the batch separately, |
| sample_weight should have shape (64, 1).</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
<li><p><strong>loss</strong>: Average loss (shape=(1,1)) computed over the per-sample loss tensor with shape (batch_size,).</p></li>
| </ul> |
| </dd> |
| </dl> |
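<p>A minimal usage sketch (illustrative only; the counts below are arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>from mxnet import nd, gluon

# With from_logits=True (the default), pred is interpreted as log(rate).
loss_fn = gluon.loss.PoissonNLLLoss(from_logits=True)
pred   = nd.array([[1.0], [2.0], [0.5]])   # predicted log-rates
target = nd.array([[2.0], [5.0], [1.0]])   # observed counts
print(loss_fn(pred, target))               # loss averaged over the batch
</pre></div></div>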
| <dl class="method"> |
| <dt id="mxnet.gluon.loss.PoissonNLLLoss.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">pred</em>, <em class="sig-param">target</em>, <em class="sig-param">sample_weight=None</em>, <em class="sig-param">epsilon=1e-08</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#PoissonNLLLoss.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.PoissonNLLLoss.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| <dl class="class"> |
| <dt id="mxnet.gluon.loss.CosineEmbeddingLoss"> |
| <em class="property">class </em><code class="sig-prename descclassname">mxnet.gluon.loss.</code><code class="sig-name descname">CosineEmbeddingLoss</code><span class="sig-paren">(</span><em class="sig-param">weight=None</em>, <em class="sig-param">batch_axis=0</em>, <em class="sig-param">margin=0</em>, <em class="sig-param">**kwargs</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#CosineEmbeddingLoss"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss" title="Permalink to this definition">¶</a></dt> |
| <dd><p>Bases: <a class="reference internal" href="#mxnet.gluon.loss.Loss" title="mxnet.gluon.loss.Loss"><code class="xref py py-class docutils literal notranslate"><span class="pre">mxnet.gluon.loss.Loss</span></code></a></p> |
| <p>For a target label 1 or -1, vectors input1 and input2, the function computes the cosine distance |
| between the vectors. This can be interpreted as how similar/dissimilar two input vectors are.</p> |
| <div class="math notranslate nohighlight"> |
\[\begin{split}L = \sum_i \begin{cases} 1 - {cos\_sim({input1}_i, {input2}_i)} & \text{ if } {label}_i = 1\\
{cos\_sim({input1}_i, {input2}_i)} & \text{ if } {label}_i = -1 \end{cases}\\
cos\_sim({input1}_i, {input2}_i) = \frac{{input1}_i \cdot {input2}_i}{\Vert {input1}_i \Vert \, \Vert {input2}_i \Vert}\end{split}\]</div>
| <p><strong>Methods</strong></p> |
| <table class="longtable docutils align-default"> |
| <colgroup> |
| <col style="width: 10%" /> |
| <col style="width: 90%" /> |
| </colgroup> |
| <tbody> |
| <tr class="row-odd"><td><p><a class="reference internal" href="#mxnet.gluon.loss.CosineEmbeddingLoss.hybrid_forward" title="mxnet.gluon.loss.CosineEmbeddingLoss.hybrid_forward"><code class="xref py py-obj docutils literal notranslate"><span class="pre">hybrid_forward</span></code></a>(F, input1, input2, label[, …])</p></td> |
<td><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p></td>
| </tr> |
| </tbody> |
| </table> |
<p><cite>input1</cite> and <cite>input2</cite> can have arbitrary shape as long as they have the same number of elements.</p>
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>weight</strong> (<em>float</em><em> or </em><em>None</em>) – Global scalar weight for loss.</p></li> |
| <li><p><strong>batch_axis</strong> (<em>int</em><em>, </em><em>default 0</em>) – The axis that represents mini-batch.</p></li> |
| <li><p><strong>margin</strong> (<em>float</em>) – Margin of separation between correct and incorrect pair.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| <dl class="simple"> |
| <dt>Inputs:</dt><dd><ul class="simple"> |
<li><p><strong>input1</strong>: a tensor with arbitrary shape.</p></li>
<li><p><strong>input2</strong>: another tensor with the same shape as input1, to which input1 is
compared for similarity and loss calculation.</p></li>
<li><p><strong>label</strong>: a 1-D tensor indicating, for each pair of input1 and input2, whether the target label is 1 or -1.</p></li>
| <li><p><strong>sample_weight</strong>: element-wise weighting tensor. Must be broadcastable |
| to the same shape as input1. For example, if input1 has shape (64, 10) |
| and you want to weigh each sample in the batch separately, |
| sample_weight should have shape (64, 1).</p></li> |
| </ul> |
| </dd> |
| <dt>Outputs:</dt><dd><ul class="simple"> |
| <li><p><strong>loss</strong>: The loss tensor with shape (batch_size,).</p></li> |
| </ul> |
| </dd> |
| </dl> |
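<p>A minimal usage sketch (illustrative only; the vector length of 8 is arbitrary):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>from mxnet import nd, gluon

# label 1 pulls a pair together; label -1 pushes it apart.
loss_fn = gluon.loss.CosineEmbeddingLoss(margin=0)
input1 = nd.random.normal(shape=(4, 8))
input2 = nd.random.normal(shape=(4, 8))
label  = nd.array([1, -1, 1, -1])          # one target per pair
print(loss_fn(input1, input2, label))      # per-pair loss, shape (4,)
</pre></div></div>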
| <dl class="method"> |
| <dt id="mxnet.gluon.loss.CosineEmbeddingLoss.hybrid_forward"> |
| <code class="sig-name descname">hybrid_forward</code><span class="sig-paren">(</span><em class="sig-param">F</em>, <em class="sig-param">input1</em>, <em class="sig-param">input2</em>, <em class="sig-param">label</em>, <em class="sig-param">sample_weight=None</em><span class="sig-paren">)</span><a class="reference internal" href="../../../_modules/mxnet/gluon/loss.html#CosineEmbeddingLoss.hybrid_forward"><span class="viewcode-link">[source]</span></a><a class="headerlink" href="#mxnet.gluon.loss.CosineEmbeddingLoss.hybrid_forward" title="Permalink to this definition">¶</a></dt> |
<dd><p>Overridden to construct the symbolic graph for this <cite>Block</cite>.</p>
| <dl class="field-list simple"> |
| <dt class="field-odd">Parameters</dt> |
| <dd class="field-odd"><ul class="simple"> |
| <li><p><strong>x</strong> (<a class="reference internal" href="../../symbol/symbol.html#mxnet.symbol.Symbol" title="mxnet.symbol.Symbol"><em>Symbol</em></a><em> or </em><a class="reference internal" href="../../ndarray/ndarray.html#mxnet.ndarray.NDArray" title="mxnet.ndarray.NDArray"><em>NDArray</em></a>) – The first input tensor.</p></li> |
| <li><p><strong>*args</strong> (<em>list of Symbol</em><em> or </em><em>list of NDArray</em>) – Additional input tensors.</p></li> |
| </ul> |
| </dd> |
| </dl> |
| </dd></dl> |
| |
| </dd></dl> |
| |
| </div> |
| |
| |
| </div> |
| <div class="side-doc-outline"> |
| <div class="side-doc-outline--content"> |
| </div> |
| </div> |
| |
| <div class="clearer"></div> |
| </div><div class="pagenation"> |
| <a id="button-prev" href="../data/vision/transforms/index.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="botton" accesskey="P"> |
| <i class="pagenation-arrow-L fas fa-arrow-left fa-lg"></i> |
| <div class="pagenation-text"> |
| <span class="pagenation-direction">Previous</span> |
| <div>vision.transforms</div> |
| </div> |
| </a> |
| <a id="button-next" href="../model_zoo/index.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="botton" accesskey="N"> |
| <i class="pagenation-arrow-R fas fa-arrow-right fa-lg"></i> |
| <div class="pagenation-text"> |
| <span class="pagenation-direction">Next</span> |
| <div>gluon.model_zoo.vision</div> |
| </div> |
| </a> |
| </div> |
| <footer class="site-footer h-card"> |
| <div class="wrapper"> |
| <div class="row"> |
| <div class="col-4"> |
| <h4 class="footer-category-title">Resources</h4> |
| <ul class="contact-list"> |
| <li><a class="u-email" href="mailto:dev@mxnet.apache.org">Dev list</a></li> |
| <li><a class="u-email" href="mailto:user@mxnet.apache.org">User mailing list</a></li> |
| <li><a href="https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+Home">Developer Wiki</a></li> |
| <li><a href="https://issues.apache.org/jira/projects/MXNET/issues">Jira Tracker</a></li> |
| <li><a href="https://github.com/apache/incubator-mxnet/labels/Roadmap">Github Roadmap</a></li> |
| <li><a href="https://discuss.mxnet.io">MXNet Discuss forum</a></li> |
| <li><a href="/versions/1.6.0/community/contribute">Contribute To MXNet</a></li> |
| |
| </ul> |
| </div> |
| |
| <div class="col-4"><ul class="social-media-list"><li><a href="https://github.com/apache/incubator-mxnet"><svg class="svg-icon"><use xlink:href="../../../_static/minima-social-icons.svg#github"></use></svg> <span class="username">apache/incubator-mxnet</span></a></li><li><a href="https://www.twitter.com/apachemxnet"><svg class="svg-icon"><use xlink:href="../../../_static/minima-social-icons.svg#twitter"></use></svg> <span class="username">apachemxnet</span></a></li><li><a href="https://youtube.com/apachemxnet"><svg class="svg-icon"><use xlink:href="../../../_static/minima-social-icons.svg#youtube"></use></svg> <span class="username">apachemxnet</span></a></li></ul> |
| </div> |
| |
| <div class="col-4 footer-text"> |
| <p>A flexible and efficient library for deep learning.</p> |
| </div> |
| </div> |
| </div> |
| </footer> |
| |
| <footer class="site-footer2"> |
| <div class="wrapper"> |
| <div class="row"> |
| <div class="col-3"> |
| <img src="../../../_static/apache_incubator_logo.png" class="footer-logo col-2"> |
| </div> |
| <div class="footer-bottom-warning col-9"> |
| <p>Apache MXNet is an effort undergoing incubation at The Apache Software Foundation (ASF), <span style="font-weight:bold">sponsored by the <i>Apache Incubator</i></span>. Incubation is required |
| of all newly accepted projects until a further review indicates that the infrastructure, |
| communications, and decision making process have stabilized in a manner consistent with other |
| successful ASF projects. While incubation status is not necessarily a reflection of the completeness |
| or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. |
| </p><p>"Copyright © 2017-2018, The Apache Software Foundation Apache MXNet, MXNet, Apache, the Apache |
| feather, and the Apache MXNet project logo are either registered trademarks or trademarks of the |
| Apache Software Foundation."</p> |
| </div> |
| </div> |
| </div> |
| </footer> |
| |
| </body> |
| </html> |