{
"cells": [
{
"cell_type": "markdown",
"id": "cbb9f192",
"metadata": {},
"source": [
"<!--- Licensed to the Apache Software Foundation (ASF) under one -->\n",
"<!--- or more contributor license agreements. See the NOTICE file -->\n",
"<!--- distributed with this work for additional information -->\n",
"<!--- regarding copyright ownership. The ASF licenses this file -->\n",
"<!--- to you under the Apache License, Version 2.0 (the -->\n",
"<!--- \"License\"); you may not use this file except in compliance -->\n",
"<!--- with the License. You may obtain a copy of the License at -->\n",
"\n",
"<!--- http://www.apache.org/licenses/LICENSE-2.0 -->\n",
"\n",
"<!--- Unless required by applicable law or agreed to in writing, -->\n",
"<!--- software distributed under the License is distributed on an -->\n",
"<!--- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->\n",
"<!--- KIND, either express or implied. See the License for the -->\n",
"<!--- specific language governing permissions and limitations -->\n",
"<!--- under the License. -->\n",
"\n",
"# Automatic differentiation with `autograd`\n",
"\n",
"We train models to get better and better as a function of experience. Usually, getting better means minimizing a loss function. To achieve this goal, we often iteratively compute the gradient of the loss with respect to weights and then update the weights accordingly. While the gradient calculations are straightforward through a chain rule, for complex models, working it out by hand can be a pain.\n",
"\n",
"Before diving deep into the model training, let's go through how MXNet’s `autograd` package expedites this work by automatically calculating derivatives.\n",
"\n",
"## Basic usage\n",
"\n",
"Let's first import the `autograd` package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "40110f12",
"metadata": {},
"outputs": [],
"source": [
"from mxnet import nd\n",
"from mxnet import autograd"
]
},
{
"cell_type": "markdown",
"id": "cde7a7b5",
"metadata": {},
"source": [
"As a toy example, let’s say that we are interested in differentiating a function $f(x) = 2 x^2$ with respect to parameter $x$. We can start by assigning an initial value of $x$."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "01465f5d",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "3"
}
},
"outputs": [],
"source": [
"x = nd.array([[1, 2], [3, 4]])\n",
"x"
]
},
{
"cell_type": "markdown",
"id": "0b4d66a0",
"metadata": {},
"source": [
"Once we compute the gradient of $f(x)$ with respect to $x$, we’ll need a place to store it. In MXNet, we can tell an NDArray that we plan to store a gradient by invoking its `attach_grad` method."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "98e9ec0a",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "6"
}
},
"outputs": [],
"source": [
"x.attach_grad()"
]
},
{
"cell_type": "markdown",
"id": "09e30569",
"metadata": {},
"source": [
"Now we’re going to define the function $y=f(x)$. To let MXNet store $y$, so that we can compute gradients later, we need to put the definition inside a `autograd.record()` scope."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "3e3203f0",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "7"
}
},
"outputs": [],
"source": [
"with autograd.record():\n",
" y = 2 * x * x"
]
},
{
"cell_type": "markdown",
"id": "76f36eb9",
"metadata": {},
"source": [
"Let’s invoke back propagation (backprop) by calling `y.backward()`. When $y$ has more than one entry, `y.backward()` is equivalent to `y.sum().backward()`.\n",
"<!-- I'm not sure what this second part really means. I don't have enough context. TMI?-->"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9236df5c",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "8"
}
},
"outputs": [],
"source": [
"y.backward()"
]
},
{
"cell_type": "markdown",
"id": "07bd5e5b",
"metadata": {},
"source": [
"Now, let’s see if this is the expected output. Note that $y=2x^2$ and $\\frac{dy}{dx} = 4x$, which should be `[[4, 8],[12, 16]]`. Let's check the automatically computed results:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "38f8a26e",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "9"
}
},
"outputs": [],
"source": [
"x.grad"
]
},
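{
"cell_type": "markdown",
"id": "3b1c9d2e",
"metadata": {},
"source": [
"That matches. To illustrate the earlier note that `y.backward()` behaves like `y.sum().backward()` when `y` has multiple entries, here is a small sketch (added for illustration, not part of the original example) that sums the output before calling `backward()` and checks that the gradient comes out the same:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7f4e8a1b",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: summing the multi-entry output before backward() gives the same gradient\n",
"with autograd.record():\n",
"    z = (2 * x * x).sum()\n",
"z.backward()\n",
"x.grad  # should again be 4x, i.e. [[4, 8], [12, 16]]"
]
},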
{
"cell_type": "markdown",
"id": "e046d349",
"metadata": {},
"source": [
"## Using Python control flows\n",
"\n",
"Sometimes we want to write dynamic programs where the execution depends on some real-time values. MXNet will record the execution trace and compute the gradient as well.\n",
"\n",
"Consider the following function `f`: it doubles the inputs until it's `norm` reaches 1000. Then it selects one element depending on the sum of its elements.\n",
"<!-- I wonder if there could be another less \"mathy\" demo of this -->"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "29efdb2b",
"metadata": {},
"outputs": [],
"source": [
"def f(a):\n",
" b = a * 2\n",
" while b.norm().asscalar() < 1000:\n",
" b = b * 2\n",
" if b.sum().asscalar() >= 0:\n",
" c = b[0]\n",
" else:\n",
" c = b[1]\n",
" return c"
]
},
{
"cell_type": "markdown",
"id": "960f640e",
"metadata": {},
"source": [
"We record the trace and feed in a random value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f7d1f3aa",
"metadata": {},
"outputs": [],
"source": [
"a = nd.random.uniform(shape=2)\n",
"a.attach_grad()\n",
"with autograd.record():\n",
" c = f(a)\n",
"c.backward()"
]
},
{
"cell_type": "markdown",
"id": "015f39f3",
"metadata": {},
"source": [
"We know that `b` is a linear function of `a`, and `c` is chosen from `b`. Then the gradient with respect to `a` be will be either `[c/a[0], 0]` or `[0, c/a[1]]`, depending on which element from `b` we picked. Let's find the results:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a9a870b8",
"metadata": {},
"outputs": [],
"source": [
"[a.grad, c/a]"
]
}
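,
{
"cell_type": "markdown",
"id": "8c2d4f6a",
"metadata": {},
"source": [
"As a final sanity check (a sketch added for illustration), note that because `c` is a linear function of `a` along the recorded path, the dot product of `a.grad` with `a` should recover `c` itself:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2e9b7c5d",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: a.grad dotted with a should reproduce c, since c is linear in a\n",
"[(a.grad * a).sum(), c]"
]
}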
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}