{
"cells": [
{
"cell_type": "markdown",
"id": "31c80f86",
"metadata": {},
"source": [
"<!--- Licensed to the Apache Software Foundation (ASF) under one -->\n",
"<!--- or more contributor license agreements. See the NOTICE file -->\n",
"<!--- distributed with this work for additional information -->\n",
"<!--- regarding copyright ownership. The ASF licenses this file -->\n",
"<!--- to you under the Apache License, Version 2.0 (the -->\n",
"<!--- \"License\"); you may not use this file except in compliance -->\n",
"<!--- with the License. You may obtain a copy of the License at -->\n",
"\n",
"<!--- http://www.apache.org/licenses/LICENSE-2.0 -->\n",
"\n",
"<!--- Unless required by applicable law or agreed to in writing, -->\n",
"<!--- software distributed under the License is distributed on an -->\n",
"<!--- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->\n",
"<!--- KIND, either express or implied. See the License for the -->\n",
"<!--- specific language governing permissions and limitations -->\n",
"<!--- under the License. -->\n",
"\n",
"# PyTorch vs Apache MXNet\n",
"\n",
"[PyTorch](https://pytorch.org/) is a popular deep learning framework due to its easy-to-understand API and its completely imperative approach. Apache MXNet includes the Gluon API which gives you the simplicity and flexibility of PyTorch and allows you to hybridize your network to leverage performance optimizations of the symbolic graph. As of April 2019, [NVidia performance benchmarks](https://developer.nvidia.com/deep-learning-performance-training-inference) show that Apache MXNet outperforms PyTorch by ~77% on training ResNet-50: 10,925 images per second vs. 6,175.\n",
"\n",
"In the next 10 minutes, we'll do a quick comparison between the two frameworks and show how small the learning curve can be when switching from PyTorch to Apache MXNet.\n",
"\n",
"## Installation\n",
"\n",
"PyTorch uses conda for installation by default, for example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3fdf13be",
"metadata": {},
"outputs": [],
"source": [
"# !conda install pytorch-cpu -c pytorch, torchvision"
]
},
{
"cell_type": "markdown",
"id": "8cc8f4ac",
"metadata": {},
"source": [
"For MXNet we use pip:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f88ad0e6",
"metadata": {},
"outputs": [],
"source": [
"# !pip install mxnet"
]
},
{
"cell_type": "markdown",
"id": "6b26974a",
"metadata": {},
"source": [
"To install Apache MXNet with GPU support, you need to specify CUDA version. For example, the snippet below will install Apache MXNet with CUDA 10.2 support:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e32e54df",
"metadata": {},
"outputs": [],
"source": [
"# !pip install mxnet-cu102"
]
},
{
"cell_type": "markdown",
"id": "070a1b75",
"metadata": {},
"source": [
"## Data manipulation\n",
"\n",
"Both PyTorch and Apache MXNet relies on multidimensional matrices as a data sources. While PyTorch follows Torch's naming convention and refers to multidimensional matrices as \"tensors\", Apache MXNet follows NumPy's conventions and refers to them as \"NDArrays\".\n",
"\n",
"In the code snippets below, we create a two-dimensional matrix where each element is initialized to 1. We show how to add 1 to each element of matrices and print the results.\n",
"\n",
"**PyTorch:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "96a01fd1",
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"x = torch.ones(5,3)\n",
"y = x + 1\n",
"y"
]
},
{
"cell_type": "markdown",
"id": "926c98ed",
"metadata": {},
"source": [
"**MXNet:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a9d5cef4",
"metadata": {},
"outputs": [],
"source": [
"from mxnet import np\n",
"\n",
"x = np.ones((5,3))\n",
"y = x + 1\n",
"y"
]
},
{
"cell_type": "markdown",
"id": "7d4c8b7c",
"metadata": {},
"source": [
"The main difference apart from the package name is that the MXNet's shape input parameter needs to be passed as a tuple enclosed in parentheses as in NumPy.\n",
"\n",
"Both frameworks support multiple functions to create and manipulate tensors / NDArrays. You can find more of them in the documentation.\n",
"\n",
"## Model training\n",
"\n",
"After covering the basics of data creation and manipulation, let's dive deep and compare how model training is done in both frameworks. In order to do so, we are going to solve image classification task on MNIST data set using Multilayer Perceptron (MLP) in both frameworks. We divide the task in 4 steps.\n",
"\n",
"### 1. Read data\n",
"\n",
"The first step is to obtain the data. We download the MNIST data set from the web and load it into memory so that we can read batches one by one.\n",
"\n",
"**PyTorch:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3c2f6459",
"metadata": {},
"outputs": [],
"source": [
"from torchvision import datasets, transforms\n",
"\n",
"trans = transforms.Compose([transforms.ToTensor(),\n",
" transforms.Normalize((0.13,), (0.31,))])\n",
"pt_train_data = torch.utils.data.DataLoader(datasets.MNIST(\n",
" root='.', train=True, download=True, transform=trans),\n",
" batch_size=128, shuffle=True, num_workers=4)"
]
},
{
"cell_type": "markdown",
"id": "aa7370b6",
"metadata": {},
"source": [
"**MXNet:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a75114fb",
"metadata": {},
"outputs": [],
"source": [
"from mxnet import gluon\n",
"from mxnet.gluon.data.vision import datasets, transforms\n",
"\n",
"trans = transforms.Compose([transforms.ToTensor(),\n",
" transforms.Normalize(0.13, 0.31)])\n",
"mx_train_data = gluon.data.DataLoader(\n",
" datasets.MNIST(train=True).transform_first(trans),\n",
" batch_size=128, shuffle=True, num_workers=4)"
]
},
{
"cell_type": "markdown",
"id": "145b4ad9",
"metadata": {},
"source": [
"Both frameworks allows you to download MNIST data set from their sources and specify that only training part of the data set is required.\n",
"\n",
"The main difference between the code snippets is that MXNet uses [transform_first](../../../api/gluon/data/index.rst#mxnet.gluon.data.Dataset.transform_first) method to indicate that the data transformation is done on the first element of the data batch, the MNIST picture, rather than the second element, the label.\n",
"\n",
"### 2. Creating the model\n",
"\n",
"Below we define a Multilayer Perceptron (MLP) with a single hidden layer\n",
"and 10 units in the output layer.\n",
"\n",
"**PyTorch:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "742503c2",
"metadata": {},
"outputs": [],
"source": [
"import torch.nn as pt_nn\n",
"\n",
"pt_net = pt_nn.Sequential(\n",
" pt_nn.Linear(28*28, 256),\n",
" pt_nn.ReLU(),\n",
" pt_nn.Linear(256, 10))"
]
},
{
"cell_type": "markdown",
"id": "9ef2fbec",
"metadata": {},
"source": [
"**MXNet:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c971b166",
"metadata": {},
"outputs": [],
"source": [
"import mxnet.gluon.nn as mx_nn\n",
"\n",
"mx_net = mx_nn.Sequential()\n",
"mx_net.add(mx_nn.Dense(256, activation='relu'),\n",
" mx_nn.Dense(10))\n",
"mx_net.initialize()"
]
},
{
"cell_type": "markdown",
"id": "5b261b97",
"metadata": {},
"source": [
"We used the Sequential container to stack layers one after the other in order to construct the neural network. Apache MXNet differs from PyTorch in the following ways:\n",
"\n",
"* In PyTorch you have to specify the input size as the first argument of the `Linear` object. Apache MXNet provides an extra flexibility to network structure by automatically inferring the input size after the first forward pass.\n",
"\n",
"* In Apache MXNet you can specify activation functions directly in fully connected and convolutional layers.\n",
"\n",
"* After the model structure is defined, Apache MXNet requires you to explicitly call the model initialization function.\n",
"\n",
"With a Sequential block, layers are executed one after the other. To have a different execution model, with PyTorch you can inherit from `nn.Module` and then customize how the `.forward()` function is executed. Similarly, in Apache MXNet you can inherit from [gluon.Block](../../../api/gluon/block.rst#mxnet.gluon.Block) to achieve similar results.\n",
"\n",
"### 3. Loss function and optimization algorithm\n",
"\n",
"The next step is to define the loss function and pick an optimization algorithm. Both PyTorch and Apache MXNet provide multiple options to chose from, and for our particular case we are going to use the cross-entropy loss function and the Stochastic Gradient Descent (SGD) optimization algorithm.\n",
"\n",
"**PyTorch:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a9abf395",
"metadata": {},
"outputs": [],
"source": [
"pt_loss_fn = pt_nn.CrossEntropyLoss()\n",
"pt_trainer = torch.optim.SGD(pt_net.parameters(), lr=0.1)"
]
},
{
"cell_type": "markdown",
"id": "59792b38",
"metadata": {},
"source": [
"**MXNet:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c81d23d",
"metadata": {},
"outputs": [],
"source": [
"mx_loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()\n",
"mx_trainer = gluon.Trainer(mx_net.collect_params(),\n",
" 'sgd', {'learning_rate': 0.1})"
]
},
{
"cell_type": "markdown",
"id": "be7d13ea",
"metadata": {},
"source": [
"The code difference between frameworks is small. The main difference is that in Apache MXNet we use [Trainer](../../../api/gluon/trainer.rst) class, which accepts optimization algorithm as an argument. We also use [.collect_params()](../../../api/gluon/block.rst#mxnet.gluon.Block.collect_params) method to get parameters of the network.\n",
"\n",
"### 4. Training\n",
"\n",
"Finally, we implement the training algorithm. Note that the results for each run\n",
"may vary because the weights will get different initialization values and the\n",
"data will be read in a different order due to shuffling.\n",
"\n",
"**PyTorch:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a0b01c5b",
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"for epoch in range(5):\n",
" total_loss = .0\n",
" tic = time.time()\n",
" for X, y in pt_train_data:\n",
" pt_trainer.zero_grad()\n",
" loss = pt_loss_fn(pt_net(X.view(-1, 28*28)), y)\n",
" loss.backward()\n",
" pt_trainer.step()\n",
" total_loss += loss.mean()\n",
" print('epoch %d, avg loss %.4f, time %.2f' % (\n",
" epoch, total_loss/len(pt_train_data), time.time()-tic))"
]
},
{
"cell_type": "markdown",
"id": "5e58fae7",
"metadata": {},
"source": [
"**MXNet:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e5dd793e",
"metadata": {},
"outputs": [],
"source": [
"from mxnet import autograd\n",
"\n",
"for epoch in range(5):\n",
" total_loss = .0\n",
" tic = time.time()\n",
" for X, y in mx_train_data:\n",
" with autograd.record():\n",
" loss = mx_loss_fn(mx_net(X), y)\n",
" loss.backward()\n",
" mx_trainer.step(batch_size=128)\n",
" total_loss += loss.mean().item()\n",
" print('epoch %d, avg loss %.4f, time %.2f' % (\n",
" epoch, total_loss/len(mx_train_data), time.time()-tic))"
]
},
{
"cell_type": "markdown",
"id": "34bb7018",
"metadata": {},
"source": [
"Some of the differences in Apache MXNet when compared to PyTorch are as follows:\n",
"\n",
"* In Apache MXNet, you don't need to flatten the 4-D input into 2-D when feeding the data into forward pass.\n",
"\n",
"* In Apache MXNet, you need to perform the calculation within the [autograd.record()](../../../api/autograd/index.rst#mxnet.autograd.record) scope so that it can be automatically differentiated in the backward pass.\n",
"\n",
"* It is not necessary to clear the gradient every time as with PyTorch's `trainer.zero_grad()` because by default the new gradient is written in, not accumulated.\n",
"\n",
"* You need to specify the update step size (usually batch size) when performing [step()](../../../api/gluon/trainer.rst#mxnet.gluon.Trainer.step) on the trainer.\n",
"\n",
"* You need to call [.item()](../../../api/np/arrays.ndarray.rst#the-n-dimensional-array-ndarray) to turn a multidimensional array into a scalar.\n",
"\n",
"* In this sample, Apache MXNet is twice as fast as PyTorch. Though you need to be cautious with such toy comparisons.\n",
"\n",
"## Conclusion\n",
"\n",
"As we saw above, Apache MXNet Gluon API and PyTorch have many similarities. The main difference lies in terminology (Tensor vs. NDArray) and behavior of accumulating gradients: gradients are accumulated in PyTorch and overwritten in Apache MXNet. The rest of the code is very similar, and it is quite straightforward to move code from one framework to the other.\n",
"\n",
"## Recommended Next Steps\n",
"\n",
"While Apache MXNet Gluon API is very similar to PyTorch, there are some extra functionality that can make your code even faster.\n",
"\n",
"* Check out [Hybridize tutorial](../../packages/gluon/blocks/hybridize.ipynb) to learn how to write imperative code which can be converted to symbolic one.\n",
"\n",
"* Also, check out how to extend Apache MXNet with your own [custom layers](../../packages/gluon/blocks/custom-layer.ipynb).\n",
"\n",
"## Appendix\n",
"\n",
"Below you can find a detailed comparison of various PyTorch functions and their equivalent in Gluon API of Apache MXNet.\n",
"\n",
"### Tensor operation\n",
"\n",
"Here is the list of function names in PyTorch Tensor that are different from Apache MXNet NDArray.\n",
"\n",
"| Function | PyTorch | MXNet Gluon |\n",
"|-------------------------------|-------------------------------------------|-----------------------------------------------------------|\n",
"| Element-wise inverse cosine | `x.acos()` or `torch.acos(x)` | `nd.arccos(x)` |\n",
"| Batch Matrix product and accumulation| `torch.addbmm(M, batch1, batch2)` | `nd.linalg_gemm(M, batch1, batch2)` Leading n-2 dim are reduced |\n",
"| Element-wise division of t1, t2, multiply v, and add t | `torch.addcdiv(t, v, t1, t2)` | `t + v*(t1/t2)` |\n",
"| Matrix product and accumulation| `torch.addmm(M, mat1, mat2)` | `nd.linalg_gemm(M, mat1, mat2)` |\n",
"| Outer-product of two vector add a matrix | `m.addr(vec1, vec2)` | Not available |\n",
"| Element-wise applies function | `x.apply_(calllable)` | Not available, but there is `nd.custom(x, 'op')` |\n",
"| Element-wise inverse sine | `x.asin()` or `torch.asin(x)` | `nd.arcsin(x)` |\n",
"| Element-wise inverse tangent | `x.atan()` or `torch.atan(x)` | `nd.arctan(x)` |\n",
"| Tangent of two tensor | `x.atan2(y)` or `torch.atan2(x, y)` | Not available |\n",
"| batch matrix product | `x.bmm(y)` or `torch.bmm(x, x)` | `nd.linalg_gemm2(x, y)` |\n",
"| Draws a sample from bernoulli distribution | `x.bernoulli()` | Not available |\n",
"| Fills a tensor with number drawn from Cauchy distribution | `x.cauchy_()` | Not available |\n",
"| Splits a tensor in a given dim| `x.chunk(num_of_chunk)` | `nd.split(x, num_outputs=num_of_chunk)` |\n",
"| Limits the values of a tensor to between min and max | `x.clamp(min, max)`| `nd.clip(x, min, max)` |\n",
"| Returns a copy of the tensor | `x.clone()` | `x.copy()` |\n",
"| Cross product | `x.cross(y)` | Not available |\n",
"| Cumulative product along an axis| `x.cumprod(1)` | Not available |\n",
"| Cumulative sum along an axis | `x.cumsum(1)` | Not available |\n",
"| Address of the first element | `x.data_ptr()` | Not available |\n",
"| Creates a diagonal tensor | `x.diag()` | Not available |\n",
"| Computes norm of a tensor | `x.dist()` | `nd.norm(x)` Only calculate L2 norm |\n",
"| Computes Gauss error function | `x.erf()` | Not available |\n",
"| Broadcasts/Expands tensor to new shape | `x.expand(3,4)` | `x.broadcast_to([3, 4])` |\n",
"| Fills a tensor with samples drawn from exponential distribution | `x.exponential_()` | `nd.random_exponential()` |\n",
"| Element-wise mod | `x.fmod(3)` | `nd.module(x, 3)` |\n",
"| Fractional portion of a tensor| `x.frac()` | `x - nd.trunc(x)` |\n",
"| Gathers values along an axis specified by dim | `torch.gather(x, 1, torch.LongTensor([[0,0],[1,0]]))` | `nd.gather_nd(x, nd.array([[[0,0],[1,1]],[[0,0],[1,0]]]))` |\n",
"| Solves least square & least norm | `B.gels(A)` | Not available |\n",
"| Draws from geometirc distribution | `x.geometric_(p)` | Not available |\n",
"| Device context of a tensor | `print(x)` will print which device x is on| `x.context` |\n",
"| Repeats tensor | `x.repeat(4,2)` | `x.tile(4,2)` |\n",
"| Data type of a tensor | `x.type()` | `x.dtype` |\n",
"| Scatter | `torch.zeros(2, 4).scatter_(1, torch.LongTensor([[2], [3]]), 1.23)` | `nd.scatter_nd(nd.array([1.23,1.23]), nd.array([[0,1],[2,3]]), (2,4))` |\n",
"| Returns the shape of a tensor | `x.size()` | `x.shape` |\n",
"| Number of elements in a tensor| `x.numel()` | `x.size` |\n",
"| Returns this tensor as a NumPy ndarray | `x.numpy()` | `x.asnumpy()` |\n",
"| Eigendecomposition for symmetric matrix | `e, v = a.symeig()` | `v, e = nd.linalg.syevd(a)` |\n",
"| Transpose | `x.t()` | `x.T` |\n",
"| Sample uniformly | `torch.uniform_()` | `nd.sample_uniform()` |\n",
"| Inserts a new dimesion | `x.unsqueeze()` | `nd.expand_dims(x)` |\n",
"| Reshape | `x.view(16)` | `x.reshape((16,))` |\n",
"| Veiw as a specified tensor | `x.view_as(y)` | `x.reshape_like(y)` |\n",
"| Returns a copy of the tensor after casting to a specified type | `x.type(type)` | `x.astype(dtype)` |\n",
"| Copies the value of one tensor to another | `dst.copy_(src)` | `src.copyto(dst)` |\n",
"| Returns a zero tensor with specified shape | `x = torch.zeros(2,3)` | `x = nd.zeros((2,3))` |\n",
"| Returns a one tensor with specified shape | `x = torch.ones(2,3)` | `x = nd.ones((2,3)` |\n",
"| Returns a Tensor filled with the scalar value 1, with the same size as input | `y = torch.ones_like(x)` | `y = nd.ones_like(x)` |\n",
"\n",
"### Functional\n",
"\n",
"### GPU\n",
"\n",
"Just like Tensor, MXNet NDArray can be copied to and operated on GPU. This is done by specifying context.\n",
"\n",
"| Function | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|----------------------------------------------------------------------------|\n",
"| Copy to GPU | `y = torch.FloatTensor(1).cuda()` | `y = mx.nd.ones((1,), ctx=mx.gpu(0))` |\n",
"| Convert to numpy array | `x = y.cpu().numpy()` | `x = y.asnumpy()` |\n",
"| Context scope | `with torch.cuda.device(1):`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`y= torch.cuda.FloatTensor(1)` | `with mx.gpu(1):`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`y = mx.nd.ones((3,5))` |\n",
"\n",
"### Cross-device\n",
"\n",
"Just like Tensor, MXNet NDArray can be copied across multiple GPUs.\n",
"\n",
"| Function | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|----------------------------------------------------------------------------|\n",
"| Copy from GPU 0 to GPU 1 | `x = torch.cuda.FloatTensor(1)`<br/>`y=x.cuda(1)`| `x = mx.nd.ones((1,), ctx=mx.gpu(0))`<br/>`y=x.as_in_context(mx.gpu(1))` |\n",
"| Copy Tensor/NDArray on different GPUs | `y.copy_(x)` | `x.copyto(y)` |\n",
"\n",
"## Autograd\n",
"\n",
"### Variable wrapper vs autograd scope\n",
"\n",
"Autograd package of PyTorch/MXNet enables automatic differentiation of Tensor/NDArray.\n",
"\n",
"| Function | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|----------------------------------------------------------------------------|\n",
"| Recording computation | `x = Variable(torch.FloatTensor(1), requires_grad=True)`<br/>`y = x * 2`<br/>`y.backward()` | `x = mx.nd.ones((1,))`<br/>`x.attach_grad()`<br/>`with mx.autograd.record():`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`y = x * 2`<br/>`y.backward()` |\n",
"\n",
"### Scope override (pause, train_mode, predict_mode)\n",
"\n",
"Some operators (Dropout, BatchNorm, etc) behave differently in training and making predictions. This can be controlled with `train_mode` and `predict_mode` scope in MXNet.\n",
"Pause scope is for code that does not need gradients to be calculated.\n",
"\n",
"| Function | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|----------------------------------------------------------------------------|\n",
"| Scope override | Not available | `x = mx.nd.ones((1,))`<br/>`with autograd.train_mode():`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`y = mx.nd.Dropout(x)`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`with autograd.predict_mode():`<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`z = mx.nd.Dropout(y)`<br/><br/>`w = mx.nd.ones((1,))`<br/>`w.attach_grad()`<br/>`with autograd.record():`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`y = x * w`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`y.backward()`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`with autograd.pause():`<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`w += w.grad` |\n",
"\n",
"### Batch-end synchronization is needed\n",
"\n",
"Apache MXNet uses lazy evaluation to achieve superior performance. The Python thread just pushes the operations into the backend engine and then returns. In training phase batch-end synchronization is needed, e.g, `asnumpy()`, `wait_to_read()`, `metric.update(...)`.\n",
"\n",
"| Function | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|----------------------------------------------------------------------------|\n",
"| Batch-end synchronization | Not available | `for (data, label) in train_data:`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`with autograd.record():`<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`output = net(data)`<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`L = loss(output, label)`<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`L.backward()`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`trainer.step(data.shape[0])`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`metric.update([label], [output])` |\n",
"\n",
"## PyTorch module and Gluon blocks\n",
"\n",
"### For new block definition, gluon is similar to PyTorch\n",
"\n",
"\n",
"| Function | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|----------------------------------------------------------------------------|\n",
"| New block definition | `class Net(torch.nn.Module):`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`def __init__(self, D_in, D_out):`<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`super(Net, self).__init__()`<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`self.linear = torch.nn.Linear(D_in, D_out)`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`def forward(self, x):`<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`return self.linear(x)` | `class Net(mx.gluon.Block):`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`def __init__(self, D_in, D_out):`<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`super(Net, self).__init__()`<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`self.dense=mx.gluon.nn.Dense(D_out, in_units=D_in)`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`def forward(self, x):`<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`return self.dense(x)` |\n",
"\n",
"### Parameter and Initializer\n",
"\n",
"When creating new layers in PyTorch, you do not need to specify its parameter initializer, and different layers have different default initializer. When you create new layers in Gluon API, you can specify its initializer or just leave it none. The parameters will finish initializing after calling `net.initialize(<init method>)` and all parameters will be initialized in `init method` except those layers whose initializer specified.\n",
"\n",
"| Function | PyTorch | MXNet Gluon |\n",
"|----------------|-------------------|--------------------|\n",
"| Get all parameters | `net.parameters()` | `net.collect_params()` |\n",
"| Initialize network | Not Available | `net.initialize(mx.init.Xavier())` |\n",
"| Specify layer initializer | `layer = torch.nn.Linear(20, 10)`<br/> `torch.nn.init.normal(layer.weight, 0, 0.01)` | `layer = mx.gluon.nn.Dense(10, weight_initializer=mx.init.Normal(0.01))` |\n",
"\n",
"### Usage of existing blocks look alike\n",
"\n",
"| Function | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|----------------------------------------------------------------------------|\n",
"| Usage of existing blocks | `y=net(x)` | `y=net(x)` |\n",
"\n",
"### HybridBlock can be hybridized, and allows partial-shape info\n",
"\n",
"HybridBlock supports forwarding with both Symbol and NDArray. After hybridized, HybridBlock will create a symbolic graph representing the forward computation and cache it. Most of the built-in blocks (Dense, Conv2D, MaxPool2D, BatchNorm, etc.) are HybridBlocks.\n",
"\n",
"Instead of explicitly declaring the number of inputs to a layer, we can simply state the number of outputs. The shape will be inferred on the fly once the network is provided with some input.\n",
"\n",
"| Function | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|----------------------------------------------------------------------------|\n",
"| partial-shape <br/> hybridized | Not Available | `net = mx.gluon.nn.HybridSequential()`<br/>`net.add(mx.gluon.nn.Dense(10))`<br/>`net.hybridize()` |\n",
"\n",
"### SymbolBlock\n",
"\n",
"SymbolBlock can construct block from symbol. This is useful for using pre-trained models as feature extractors.\n",
"\n",
"| Function | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|----------------------------------------------------------------------------|\n",
"| SymbolBlock | Not Available | `alexnet = mx.gluon.model_zoo.vision.alexnet(pretrained=True)`<br/>`out = alexnet(inputs)`<br/>`internals = out.get_internals()`<br/>`outputs = [internals['model_dense0_relu_fwd_output']]`<br/>`feat_model = gluon.SymbolBlock(outputs, inputs, params=alexnet.collect_params())` |\n",
"\n",
"## PyTorch optimizer vs Gluon Trainer\n",
"### For Gluon API calling zero_grad is not necessary most of the time\n",
"`zero_grad` in optimizer (PyTorch) or Trainer (Gluon API) clears the gradients of all parameters. In Gluon API, there is no need to clear the gradients every batch if `grad_req = 'write'`(default).\n",
"\n",
"| Function | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|------------------------------------------|\n",
"| clear the gradients | `optm = torch.optim.SGD(model.parameters(), lr=0.1)`<br/>`optm.zero_grad()`<br/>`loss_fn(model(input), target).backward()`<br/>`optm.step()` | `trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})`<br/>`with autograd.record():`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`loss = loss_fn(net(data), label)`<br/>`loss.backward()`<br/>`trainer.step(batch_size)` |\n",
"\n",
"### Multi-GPU training\n",
"\n",
"| Function | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|------------------------------------------|\n",
"| data parallelism | `net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])`<br/>`output = net(data)` | `ctx = [mx.gpu(i) for i in range(3)]`<br/>`data = gluon.utils.split_and_load(data, ctx)`<br/>`label = gluon.utils.split_and_load(label, ctx)`<br/>`with autograd.record():`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`losses = [loss(net(X), Y) for X, Y in zip(data, label)]`<br/>`for l in losses:`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`l.backward()` |\n",
"\n",
"### Distributed training\n",
"\n",
"| Function | Pytorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|------------------------------------------|\n",
"| distributed data parallelism | `torch.distributed.init_process_group(...)`<br/>`model = torch.nn.parallel.distributedDataParallel(model, ...)` | `store = kv.create('dist')`<br/>`trainer = gluon.Trainer(net.collect_params(), ..., kvstore=store)` |\n",
"\n",
"## Monitoring\n",
"\n",
"### Apache MXNet has pre-defined metrics\n",
"\n",
"Gluon provide several predefined metrics which can online evaluate the performance of a learned model.\n",
"\n",
"| Function | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|------------------------------------------|\n",
"| metric | Not available | `metric = mx.metric.Accuracy()`<br/>`with autograd.record():`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`output = net(data)`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`L = loss(ouput, label)`<br/>&nbsp;&nbsp;&nbsp;&nbsp;`loss(ouput, label).backward()`<br/>`trainer.step(batch_size)`<br/>`metric.update(label, output)` |\n",
"\n",
"### Data visualization\n",
"\n",
"TensorboardX (PyTorch) and [MXBoard](https://github.com/awslabs/mxboard) (MXNet) can be used to visualize your network and plot quantitative metrics about the execution of your graph.\n",
"\n",
"| PyTorch | MXNet |\n",
"| ---------------------------------------------- | ---------------------------------------------- |\n",
"| `sw = tensorboardX.SummaryWriter()` | `sw = mxboard.SummaryWriter()` |\n",
"| `...` | `...` |\n",
"| `for name, param in model.named_parameters():` | `for name, param in net.collect_params():` |\n",
"| ` grad = param.clone().cpu().data.numpy()` | ` grad = param.grad.asnumpy().flatten()` |\n",
"| ` sw.add_histogram(name, grad, n_iter)` | ` sw.add_histogram(tag=str(param),` |\n",
"| `...` | ` values=grad,` |\n",
"| `sw.close()` | ` bins=200,` |\n",
"| | ` global_step=i)` |\n",
"| | `...` |\n",
"| | `sw.close()` |\n",
"\n",
"## I/O and deploy\n",
"\n",
"### Data loading\n",
"\n",
"`Dataset` and `DataLoader` are the basic components for loading data.\n",
"\n",
"| Class | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|------------------------------------------|\n",
"| Dataset holding arrays | `torch.utils.data.TensorDataset(data_tensor, label_tensor)`| `gluon.data.ArrayDataset(data_array, label_array)` |\n",
"| Data loader | `torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, num_workers=0, collate_fn=<function default_collate>, drop_last=False)` | `gluon.data.DataLoader(dataset, batch_size=None, shuffle=False, sampler=None, last_batch='keep', batch_sampler=None, batchify_fn=None, num_workers=0)`|\n",
"| Sequentially applied sampler | `torch.utils.data.sampler.SequentialSampler(data_source)` | `gluon.data.SequentialSampler(length)` |\n",
"| Random order sampler | `torch.utils.data.sampler.RandomSampler(data_source)` | `gluon.data.RandomSampler(length)`|\n",
"\n",
"Some commonly used datasets for computer vision are provided in `mx.gluon.data.vision` package.\n",
"\n",
"| Class | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|------------------------------------------|\n",
"| MNIST handwritten digits dataset. | `torchvision.datasets.MNIST`| `mx.gluon.data.vision.MNIST` |\n",
"| CIFAR10 Dataset. | `torchvision.datasets.CIFAR10` | `mx.gluon.data.vision.CIFAR10`|\n",
"| CIFAR100 Dataset. | `torchvision.datasets.CIFAR100` | `mx.gluon.data.vision.CIFAR100` |\n",
"| A generic data loader where the images are arranged in folders. | `torchvision.datasets.ImageFolder(root, transform=None, target_transform=None, loader=<function default_loader>)` | `mx.gluon.data.vision.ImageFolderDataset(root, flag, transform=None)`|\n",
"\n",
"### Serialization\n",
"\n",
"Serialization and deserialization are achieved by calling `save_parameters` and `load_parameters`.\n",
"\n",
"| Class | PyTorch | MXNet Gluon |\n",
"|------------------------|-----------------------------------|------------------------------------------|\n",
"| Save model parameters | `torch.save(the_model.state_dict(), filename)`| `model.save_parameters(filename)`|\n",
"| Load parameters | `the_model.load_state_dict(torch.load(PATH))` | `model.load_parameters(filename, ctx, allow_missing=False, ignore_extra=False)` |"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}