{
"cells": [
{
"cell_type": "markdown",
"id": "1835eb61",
"metadata": {},
"source": [
"<!--- Licensed to the Apache Software Foundation (ASF) under one -->\n",
"<!--- or more contributor license agreements. See the NOTICE file -->\n",
"<!--- distributed with this work for additional information -->\n",
"<!--- regarding copyright ownership. The ASF licenses this file -->\n",
"<!--- to you under the Apache License, Version 2.0 (the -->\n",
"<!--- \"License\"); you may not use this file except in compliance -->\n",
"<!--- with the License. You may obtain a copy of the License at -->\n",
"\n",
"<!--- http://www.apache.org/licenses/LICENSE-2.0 -->\n",
"\n",
"<!--- Unless required by applicable law or agreed to in writing, -->\n",
"<!--- software distributed under the License is distributed on an -->\n",
"<!--- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->\n",
"<!--- KIND, either express or implied. See the License for the -->\n",
"<!--- specific language governing permissions and limitations -->\n",
"<!--- under the License. -->\n",
"\n",
"# Parameter Management\n",
"\n",
"The ultimate goal of training deep neural networks is finding good parameter values for a given architecture. The [nn.Sequential](../../../../api/gluon/nn/index.rst#mxnet.gluon.nn.Sequential) class is a perfect tool to work with standard models. However, very few models are entirely standard, and most scientists want to build novel things, which requires working with model parameters.\n",
"\n",
"This section shows how to manipulate parameters. In particular we will cover the following aspects:\n",
"\n",
"* How to access parameters in order to debug, diagnose, visualize or save them. It is the first step to understand how to work with custom models.\n",
"* We will learn how to set parameters to specific values, e.g. how to initialize them. We will discuss the structure of parameter initializers.\n",
"* We will show how this knowledge can be used to build networks that share some parameters.\n",
"\n",
"As always, we start with a Multilayer Perceptron with a single hidden layer. We will use it to demonstrate the aspects mentioned above."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6f7d1dae",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "1"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"[04:46:08] /work/mxnet/src/storage/storage.cc:202: Using Pooled (Naive) StorageManager for CPU\n"
]
},
{
"data": {
"text/plain": [
"array([[-0.01560277, -0.06336804, -0.04376109, 0.05757218, -0.10912388,\n",
" -0.10655528, 0.0128617 , -0.06423943, 0.05268409, -0.09071875],\n",
" [ 0.01391386, -0.04640213, -0.06453254, 0.0399485 , -0.08094363,\n",
" -0.06119407, -0.00945095, -0.04769442, -0.02566512, -0.05020918]])"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from mxnet import init, np\n",
"from mxnet.gluon import nn\n",
"\n",
"\n",
"net = nn.Sequential()\n",
"net.add(nn.Dense(256, activation='relu'))\n",
"net.add(nn.Dense(10))\n",
"net.initialize() # Use the default initialization method\n",
"\n",
"x = np.random.uniform(size=(2, 20))\n",
"net(x) # Forward computation"
]
},
{
"cell_type": "markdown",
"id": "f6c5529e",
"metadata": {},
"source": [
"## Parameter Access\n",
"\n",
"In case of a Sequential class we can access the parameters simply by indexing each layer of the network. The `params` variable contains the required data. Let's try this out in practice by inspecting the parameters of the first layer."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cfc71f28",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "2"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'0.weight': Parameter (shape=(256, 20), dtype=float32), '0.bias': Parameter (shape=(256,), dtype=float32), '1.weight': Parameter (shape=(10, 256), dtype=float32), '1.bias': Parameter (shape=(10,), dtype=float32)}\n"
]
}
],
"source": [
"print(net.collect_params())"
]
},
{
"cell_type": "markdown",
"id": "c6e42243",
"metadata": {},
"source": [
"From the output we can see that the layer consists of two sets of parameters: `0.weight` and `0.bias`. They are both single precision and they have the necessary shapes that we would expect from the first layer, given that the input dimension is 20 and the output dimension 256. The names of the parameters are very useful, because they allow us to identify parameters *uniquely* even in a network of hundreds of layers and with nontrivial structure. The second layer is structured in a similar way.\n",
"\n",
"### Targeted Parameters\n",
"\n",
"In order to do something useful with the parameters we need to access them. There are several ways to do this, ranging from simple to general. Let's look at some of them."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d18ffa0a",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "3"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Parameter (shape=(10,), dtype=float32)\n",
"[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n"
]
}
],
"source": [
"print(net[1].bias)\n",
"print(net[1].bias.data())"
]
},
{
"cell_type": "markdown",
"id": "f61b4012",
"metadata": {},
"source": [
"The first line returns the bias of the second layer. Since this is an object containing data, gradients, and additional information, we need to request the data explicitly. To request the data, we call `data` method on the parameter on the second line. Note that the bias is all 0 since we initialized the bias to contain all zeros.\n",
"\n",
"We can also access the parameter by name, such as `0.weight`. This is possible since each layer comes with its own parameter dictionary that can be accessed directly. Both methods are entirely equivalent, but the first method leads to more readable code."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f6604975",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "4"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Parameter (shape=(256, 20), dtype=float32)\n",
"[[-0.01212035 -0.05374379 0.04984665 ... -0.04300905 0.05797013\n",
" 0.03056206]\n",
" [ 0.04715079 0.06293494 -0.00091191 ... 0.05132817 0.04056697\n",
" -0.0134289 ]\n",
" [-0.05758242 0.01202678 -0.01845955 ... 0.04554842 -0.0192279\n",
" 0.04583725]\n",
" ...\n",
" [ 0.00876342 0.06534793 -0.00538377 ... 0.04401228 0.01607978\n",
" 0.06334015]\n",
" [-0.03986076 0.03499746 0.01426854 ... -0.06219698 -0.03732041\n",
" 0.01419816]\n",
" [ 0.02922095 -0.02636104 -0.03194058 ... -0.00321652 -0.03190077\n",
" 0.05440574]]\n"
]
}
],
"source": [
"print(net[0].params['weight'])\n",
"print(net[0].params['weight'].data())"
]
},
{
"cell_type": "markdown",
"id": "e2e41290",
"metadata": {},
"source": [
"Note that the weights are nonzero as they were randomly initialized when we constructed the network.\n",
"\n",
"[data](../../../../api/gluon/parameter.rst#mxnet.gluon.Parameter.data) is not the only method that we can invoke. For instance, we can compute the gradient with respect to the parameters. It has the same shape as the weight. However, since we did not invoke backpropagation yet, the values are all 0."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "383111f1",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "5"
}
},
"outputs": [
{
"data": {
"text/plain": [
"array([[0., 0., 0., ..., 0., 0., 0.],\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" ...,\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" [0., 0., 0., ..., 0., 0., 0.],\n",
" [0., 0., 0., ..., 0., 0., 0.]])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net[0].weight.grad()"
]
},
{
"cell_type": "markdown",
"id": "28c5497f",
"metadata": {},
"source": [
"### All Parameters at Once\n",
"\n",
"Accessing parameters as described above can be a bit tedious, in particular if we have more complex blocks, or blocks of blocks (or even blocks of blocks of blocks), since we need to walk through the entire tree in reverse order to learn how the blocks were constructed. To avoid this, blocks come with a method [collect_params](../../../../api/gluon/block.rst#mxnet.gluon.Block.collect_params) which grabs all parameters of a network in one dictionary such that we can traverse it with ease. It does so by iterating over all constituents of a block and calls `collect_params` on sub-blocks as needed. To see the difference, consider the following:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "84bfbc81",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "6"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'weight': Parameter (shape=(256, 20), dtype=float32), 'bias': Parameter (shape=(256,), dtype=float32)}\n",
"{'0.weight': Parameter (shape=(256, 20), dtype=float32), '0.bias': Parameter (shape=(256,), dtype=float32), '1.weight': Parameter (shape=(10, 256), dtype=float32), '1.bias': Parameter (shape=(10,), dtype=float32)}\n"
]
}
],
"source": [
"# Parameters only for the first layer\n",
"print(net[0].collect_params())\n",
"# Parameters of the entire network\n",
"print(net.collect_params())"
]
},
{
"cell_type": "markdown",
"id": "9388bc84",
"metadata": {},
"source": [
"This provides us with the third way of accessing the parameters of the network. If we want to get the value of the bias term of the second layer we could simply use this:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "92f1dc27",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "7"
}
},
"outputs": [
{
"data": {
"text/plain": [
"array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net.collect_params()['1.bias'].data()"
]
},
{
"cell_type": "markdown",
"id": "c9060e24",
"metadata": {},
"source": [
"By adding a regular expression as an argument to `collect_params` method, we can select only a particular set of parameters whose names are matched by the regular expression."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "61945039",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "8"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'0.weight': Parameter (shape=(256, 20), dtype=float32), '1.weight': Parameter (shape=(10, 256), dtype=float32)}\n",
"{'0.weight': Parameter (shape=(256, 20), dtype=float32), '0.bias': Parameter (shape=(256,), dtype=float32)}\n"
]
}
],
"source": [
"print(net.collect_params('.*weight'))\n",
"print(net.collect_params('0.*'))"
]
},
{
"cell_type": "markdown",
"id": "8d593320",
"metadata": {},
"source": [
"### Rube Goldberg strikes again\n",
"\n",
"Let's see how the parameter naming conventions work if we nest multiple blocks inside each other. For that we first define a function that produces blocks (a block factory, so to speak) and then we combine these inside yet larger blocks."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "72d98e4c",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "20"
}
},
"outputs": [
{
"data": {
"text/plain": [
"array([[ 9.0999608e-09, -3.5124164e-09, -2.1772841e-09, 4.7371032e-09,\n",
" -6.0350844e-09, -3.3993408e-10, -2.9719969e-09, 5.7443899e-09,\n",
" -1.7375938e-09, 2.6284099e-09],\n",
" [ 5.7530261e-09, -3.0763021e-09, -3.4435163e-10, 2.1423765e-09,\n",
" -3.9806052e-09, -3.4428879e-10, -3.2744367e-09, 2.1464188e-09,\n",
" 1.7963833e-09, 3.3782046e-09]])"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def block1():\n",
" net = nn.Sequential()\n",
" net.add(nn.Dense(32, activation='relu'))\n",
" net.add(nn.Dense(16, activation='relu'))\n",
" return net\n",
"\n",
"def block2():\n",
" net = nn.Sequential()\n",
" for i in range(4):\n",
" net.add(block1())\n",
" return net\n",
"\n",
"rgnet = nn.Sequential()\n",
"rgnet.add(block2())\n",
"rgnet.add(nn.Dense(10))\n",
"rgnet.initialize()\n",
"rgnet(x)"
]
},
{
"cell_type": "markdown",
"id": "b1c5361a",
"metadata": {},
"source": [
"Now that we are done designing the network, let's see how it is organized. `collect_params` provides us with this information, both in terms of naming and in terms of logical structure."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "3148be70",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<bound method Block.collect_params of Sequential(\n",
" (0): Sequential(\n",
" (0): Sequential(\n",
" (0): Dense(20 -> 32, Activation(relu))\n",
" (1): Dense(32 -> 16, Activation(relu))\n",
" )\n",
" (1): Sequential(\n",
" (0): Dense(16 -> 32, Activation(relu))\n",
" (1): Dense(32 -> 16, Activation(relu))\n",
" )\n",
" (2): Sequential(\n",
" (0): Dense(16 -> 32, Activation(relu))\n",
" (1): Dense(32 -> 16, Activation(relu))\n",
" )\n",
" (3): Sequential(\n",
" (0): Dense(16 -> 32, Activation(relu))\n",
" (1): Dense(32 -> 16, Activation(relu))\n",
" )\n",
" )\n",
" (1): Dense(16 -> 10, linear)\n",
")>\n",
"{'0.0.0.weight': Parameter (shape=(32, 20), dtype=float32), '0.0.0.bias': Parameter (shape=(32,), dtype=float32), '0.0.1.weight': Parameter (shape=(16, 32), dtype=float32), '0.0.1.bias': Parameter (shape=(16,), dtype=float32), '0.1.0.weight': Parameter (shape=(32, 16), dtype=float32), '0.1.0.bias': Parameter (shape=(32,), dtype=float32), '0.1.1.weight': Parameter (shape=(16, 32), dtype=float32), '0.1.1.bias': Parameter (shape=(16,), dtype=float32), '0.2.0.weight': Parameter (shape=(32, 16), dtype=float32), '0.2.0.bias': Parameter (shape=(32,), dtype=float32), '0.2.1.weight': Parameter (shape=(16, 32), dtype=float32), '0.2.1.bias': Parameter (shape=(16,), dtype=float32), '0.3.0.weight': Parameter (shape=(32, 16), dtype=float32), '0.3.0.bias': Parameter (shape=(32,), dtype=float32), '0.3.1.weight': Parameter (shape=(16, 32), dtype=float32), '0.3.1.bias': Parameter (shape=(16,), dtype=float32), '1.weight': Parameter (shape=(10, 16), dtype=float32), '1.bias': Parameter (shape=(10,), dtype=float32)}\n"
]
}
],
"source": [
"print(rgnet.collect_params)\n",
"print(rgnet.collect_params())"
]
},
{
"cell_type": "markdown",
"id": "73392dd8",
"metadata": {},
"source": [
"We can access layers following the hierarchy in which they are structured. For instance, if we want to access the bias of the first layer of the second subblock of the first major block, we could perform the following:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "bd2c96ec",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
" 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rgnet[0][1][0].bias.data()"
]
},
{
"cell_type": "markdown",
"id": "a2d39dbc",
"metadata": {},
"source": [
"### Saving and loading parameters\n",
"\n",
"In order to save parameters, we can use [save_parameters](../../../../api/gluon/block.rst#mxnet.gluon.Block.save_parameters) method on the whole network or a particular subblock. The only parameter that is needed is the `file_name`. In a similar way, we can load parameters back from the file. We use [load_parameters](../../../../api/gluon/block.rst#mxnet.gluon.Block.load_parameters) method for that:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "824aecd0",
"metadata": {},
"outputs": [],
"source": [
"rgnet.save_parameters('model.params')\n",
"rgnet.load_parameters('model.params')"
]
},
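{
"cell_type": "markdown",
"id": "3c1f9a2b",
"metadata": {},
"source": [
"Since `save_parameters` and `load_parameters` are methods of every [Block](../../../../api/gluon/block.rst#mxnet.gluon.Block), the same pattern also works for a single sub-block. The cell below is a small optional sketch added for illustration rather than part of the recorded run; the file name `block0.params` is just a placeholder. It saves and restores only the parameters of the first major block of `rgnet`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9d4e7f10",
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: save and restore the parameters of a single sub-block.\n",
"# The file name 'block0.params' is a placeholder chosen for this example.\n",
"rgnet[0].save_parameters('block0.params')\n",
"rgnet[0].load_parameters('block0.params')"
]
},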
{
"cell_type": "markdown",
"id": "d6864d34",
"metadata": {},
"source": [
"## Parameter Initialization\n",
"\n",
"Now that we know how to access the parameters, let's look at how to initialize them properly. By default, MXNet initializes the weight matrices uniformly by drawing from $U[-0.07, 0.07]$ and the bias parameters are all set to $0$. However, we often need to use other methods to initialize the weights. MXNet's [init](../../../../api/initializer/index.rst) module provides a variety of preset initialization methods, but if we want something unusual, we need to do a bit of extra work.\n",
"\n",
"### Built-in Initialization\n",
"\n",
"Let's begin with the built-in initializers. The code below initializes all parameters with Gaussian random variables."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "58eddef4",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "9"
}
},
"outputs": [
{
"data": {
"text/plain": [
"array([ 0.00049951, -0.00416777, -0.00443468, 0.00853858, 0.00714435,\n",
" 0.00273024, 0.00608095, -0.0041742 , 0.02138895, 0.00299026,\n",
" 0.0148234 , -0.00553365, 0.00124036, -0.00121287, -0.01600852,\n",
" -0.00607758, -0.00800275, 0.01979822, -0.00506664, -0.00186143])"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# force_reinit ensures that the variables are initialized again,\n",
"# regardless of whether they were already initialized previously\n",
"net.initialize(init=init.Normal(sigma=0.01), force_reinit=True)\n",
"net[0].weight.data()[0]"
]
},
{
"cell_type": "markdown",
"id": "bbbf7c06",
"metadata": {},
"source": [
"If we wanted to initialize all parameters to 1, we could do this simply by changing the initializer to [Constant(1)](../../../../api/initializer/index.rst#mxnet.initializer.Constant)."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "8e3684cd",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "10"
}
},
"outputs": [
{
"data": {
"text/plain": [
"array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
" 1., 1., 1.])"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net.initialize(init=init.Constant(1), force_reinit=True)\n",
"net[0].weight.data()[0]"
]
},
{
"cell_type": "markdown",
"id": "a71772dd",
"metadata": {},
"source": [
"If we want to initialize only a specific parameter in a different manner, we can simply set the initializer only for the appropriate subblock (or parameter) for that matter. For instance, below we initialize the second layer to a constant value of 42 and we use the [Xavier](../../../../api/initializer/index.rst#mxnet.initializer.Xavier) initializer for the weights of the first layer."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "aaa9f0fc",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "11"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"42.0\n",
"[-8.6784363e-05 1.4604107e-01 1.1358139e-01 2.5852650e-02\n",
" 1.3344720e-01 1.1060861e-01 8.2233369e-02 1.1406082e-01\n",
" -1.3995498e-02 1.2004420e-02 -1.0967357e-01 1.0333490e-01\n",
" 4.0787160e-03 -8.0248415e-02 1.0142967e-01 -1.9839540e-02\n",
" -6.3506939e-02 1.2286544e-01 -1.3792697e-01 -1.3527359e-01]\n"
]
}
],
"source": [
"net[1].initialize(init=init.Constant(42), force_reinit=True)\n",
"net[0].weight.initialize(init=init.Xavier(), force_reinit=True)\n",
"print(net[1].weight.data()[0,0])\n",
"print(net[0].weight.data()[0])"
]
},
{
"cell_type": "markdown",
"id": "020a4ef7",
"metadata": {},
"source": [
"### Custom Initialization\n",
"\n",
"Sometimes, the initialization methods we need are not provided in the `init` module. If this is the case, we can implement a subclass of the [Initializer](../../../../api/initializer/index.rst#mxnet.initializer.Initializer) class so that we can use it like any other initialization method. Usually, we only need to implement the `_init_weight` method and modify the incoming NDArray according to the initial result. In the example below, we pick a nontrivial distribution, just to prove the point. We draw the coefficients from the following distribution:\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
" w \\sim \\begin{cases}\n",
" U[5, 10] & \\text{ with probability } \\frac{1}{4} \\\\\n",
" 0 & \\text{ with probability } \\frac{1}{2} \\\\\n",
" U[-10, -5] & \\text{ with probability } \\frac{1}{4}\n",
" \\end{cases}\n",
"\\end{aligned}\n",
"$$"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "722eb3e9",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "12"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Init weight (256, 20)\n",
"Init weight (10, 256)\n"
]
},
{
"data": {
"text/plain": [
"array([ 0. , -0. , 8.958464, 0. , 0. , 0. ,\n",
" -0. , -0. , 0. , -8.722489, -0. , 0. ,\n",
" -0. , 0. , 9.477695, 9.403345, 9.750938, -0. ,\n",
" -0. , -0. ])"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"class MyInit(init.Initializer):\n",
" def _init_weight(self, name, data):\n",
" print('Init', name, data.shape)\n",
" data[:] = np.random.uniform(low=-10, high=10, size=data.shape)\n",
" data *= np.abs(data) >= 5\n",
"\n",
"net.initialize(MyInit(), force_reinit=True)\n",
"net[0].weight.data()[0]"
]
},
{
"cell_type": "markdown",
"id": "1f8ca321",
"metadata": {},
"source": [
"If even this functionality is insufficient, we can set parameters directly. Since `data()` returns an NDArray we can access it just like any other matrix. A note for advanced users - if you want to adjust parameters within an [autograd](../../../../api/autograd/index.rst) scope you need to use [set_data](../../../../api/gluon/parameter.rst#mxnet.gluon.Parameter.set_data) to avoid confusing the automatic differentiation mechanics."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "020cc488",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "13"
}
},
"outputs": [
{
"data": {
"text/plain": [
"array([42. , 1. , 9.958464 , 1. , 1. ,\n",
" 1. , 1. , 1. , 1. , -7.7224894,\n",
" 1. , 1. , 1. , 1. , 10.477695 ,\n",
" 10.403345 , 10.750938 , 1. , 1. , 1. ])"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net[0].weight.data()[:] += 1\n",
"net[0].weight.data()[0,0] = 42\n",
"net[0].weight.data()[0]"
]
},
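{
"cell_type": "markdown",
"id": "b7a2c5e8",
"metadata": {},
"source": [
"To make the `set_data` remark above concrete, here is a minimal sketch added for illustration (it is not an executed part of the recorded walkthrough): inside an [autograd](../../../../api/autograd/index.rst) recording scope we replace the first layer's weight via `set_data` instead of writing into the array returned by `data()`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4f8d2a6c",
"metadata": {},
"outputs": [],
"source": [
"from mxnet import autograd\n",
"\n",
"# Minimal sketch: while autograd is recording, assign new parameter values\n",
"# with set_data() instead of mutating the array returned by data() in place.\n",
"with autograd.record():\n",
"    y = net(x)\n",
"    net[0].weight.set_data(net[0].weight.data() + 1)"
]
},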
{
"cell_type": "markdown",
"id": "42504986",
"metadata": {},
"source": [
"## Tied Parameters\n",
"\n",
"In some cases, we want to share model parameters across multiple layers. For instance, when we want to find good word embeddings we may decide to use the same parameters both for encoding and decoding of words. In the code below, we allocate a dense layer and then use its parameters specifically to set those of another layer."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "b36f90bd",
"metadata": {
"attributes": {
"classes": [],
"id": "",
"n": "14"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[ True True True True True True True True]\n",
"[ True True True True True True True True]\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/work/mxnet/python/mxnet/util.py:755: UserWarning: Parameter 'bias' is already initialized, ignoring. Set force_reinit=True to re-initialize.\n",
" return func(*args, **kwargs)\n"
]
}
],
"source": [
"net = nn.Sequential()\n",
"# We need to give the shared layer a name such that we can reference\n",
"# its parameters\n",
"shared = nn.Dense(8, activation='relu')\n",
"net.add(nn.Dense(8, activation='relu'),\n",
" shared,\n",
" nn.Dense(8, activation='relu').share_parameters(shared.params),\n",
" nn.Dense(10))\n",
"net.initialize()\n",
"\n",
"x = np.random.uniform(size=(2, 20))\n",
"net(x)\n",
"\n",
"# Check whether the parameters are the same\n",
"print(net[1].weight.data()[0] == net[2].weight.data()[0])\n",
"net[1].weight.data()[0,0] = 100\n",
"# And make sure that they're actually the same object rather\n",
"# than just having the same value\n",
"print(net[1].weight.data()[0] == net[2].weight.data()[0])"
]
},
{
"cell_type": "markdown",
"id": "50c296a1",
"metadata": {},
"source": [
"The above example shows that the parameters of the second and third layer are tied. They are identical rather than just being equal. That is, by changing one of the parameters the other one changes, too. What happens to the gradients is quite ingenious. Since the model parameters contain gradients, the gradients of the second hidden layer and the third hidden layer are accumulated in the [shared.params.grad()](../../../../api/gluon/parameter.rst#mxnet.gluon.Parameter.grad) during backpropagation."
]
}
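{
"cell_type": "markdown",
"id": "7f3a9c11",
"metadata": {},
"source": [
"To see this accumulation in action, the following sketch (added here for illustration, not part of the recorded outputs above) runs one backward pass and confirms that the tied layers report the same gradient values, since both refer to a single shared parameter."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e2d8b47",
"metadata": {},
"outputs": [],
"source": [
"from mxnet import autograd\n",
"\n",
"# Illustrative sketch: run one forward/backward pass and compare the\n",
"# gradients of the tied layers; they agree because both layers share a\n",
"# single Parameter object.\n",
"with autograd.record():\n",
"    loss = net(x).sum()\n",
"loss.backward()\n",
"print(net[1].weight.grad()[0] == net[2].weight.grad()[0])"
]
}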
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}