{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Matrix Factorization (MF) part 2: Getting Fancy\n",
"Demonstrates matrix factorization with MXNet on the [MovieLens 100k](http://grouplens.org/datasets/movielens/100k/) dataset. This is an extension of [part 1](demo1-MF.ipynb) where we try fancy optimizers and network structures.\n",
"\n",
"You need to have python package pandas and bokeh installed (pip install pandas bokeh)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import mxnet as mx\n",
"from movielens_data import get_data_iter, max_id\n",
"from matrix_fact import train"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# If MXNet is not compiled with GPU support (e.g. on OSX), set to [mx.cpu(0)]\n",
"# Can be changed to [mx.gpu(0), mx.gpu(1), ..., mx.gpu(N-1)] if there are N GPUs\n",
"ctx = [mx.gpu(0)]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"train_test_data = get_data_iter(batch_size=100)\n",
"max_user, max_item = max_id('./ml-100k/u.data')\n",
"(max_user, max_item)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Linear MF\n",
"Same as before, but this time with the [Adam optimizer](https://arxiv.org/abs/1412.6980) which will often converge much faster than SGD w/ momentum as we used before. You should see this model over-fitting quickly. "
]
},
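{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the linear model scores a (user, item) pair by the inner product of the two $k$-dimensional embeddings,\n",
"\n",
"$$\\hat{r}_{ui} = \\mathbf{u}_u^\\top \\mathbf{v}_i = \\sum_{f=1}^{k} u_{uf}\\, v_{if},$$\n",
"\n",
"which is exactly the elementwise product followed by `sum_axis` in the code below."
]
},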
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def plain_net(k):\n",
" # input\n",
" user = mx.symbol.Variable('user')\n",
" item = mx.symbol.Variable('item')\n",
" score = mx.symbol.Variable('score')\n",
" # user feature lookup\n",
" user = mx.symbol.Embedding(data = user, input_dim = max_user, output_dim = k) \n",
" # item feature lookup\n",
" item = mx.symbol.Embedding(data = item, input_dim = max_item, output_dim = k)\n",
" # predict by the inner product, which is elementwise product and then sum\n",
" pred = user * item\n",
" pred = mx.symbol.sum_axis(data = pred, axis = 1)\n",
" pred = mx.symbol.Flatten(data = pred)\n",
" # loss layer\n",
" pred = mx.symbol.LinearRegressionOutput(data = pred, label = score)\n",
" return pred\n",
"\n",
"net1 = plain_net(64)\n",
"mx.viz.plot_network(net1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"scrolled": false
},
"outputs": [],
"source": [
"results1 = train(net1, train_test_data, num_epoch=25, learning_rate=0.001, optimizer='adam', ctx=ctx)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Neural Network (non-linear) MF\n",
"The non-linear model converges strangely with Adam."
]
},
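{
"cell_type": "markdown",
"metadata": {},
"source": [
"In equation form, the prediction becomes\n",
"\n",
"$$\\hat{r}_{ui} = \\phi(\\mathbf{u}_u)^\\top \\psi(\\mathbf{v}_i), \\qquad \\phi(\\mathbf{x}) = W_u\\,\\mathrm{relu}(\\mathbf{x}) + \\mathbf{b}_u, \\quad \\psi(\\mathbf{x}) = W_v\\,\\mathrm{relu}(\\mathbf{x}) + \\mathbf{b}_v,$$\n",
"\n",
"where $W_u, W_v, \\mathbf{b}_u, \\mathbf{b}_v$ stand for the weights of the two `FullyConnected` layers defined in the next cell."
]
},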
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"scrolled": false
},
"outputs": [],
"source": [
"def get_one_layer_mlp(hidden, k):\n",
" # input\n",
" user = mx.symbol.Variable('user')\n",
" item = mx.symbol.Variable('item')\n",
" score = mx.symbol.Variable('score')\n",
" # user latent features\n",
" user = mx.symbol.Embedding(data = user, input_dim = max_user, output_dim = k)\n",
" user = mx.symbol.Activation(data = user, act_type='relu')\n",
" user = mx.symbol.FullyConnected(data = user, num_hidden = hidden)\n",
" # item latent features\n",
" item = mx.symbol.Embedding(data = item, input_dim = max_item, output_dim = k)\n",
" item = mx.symbol.Activation(data = item, act_type='relu')\n",
" item = mx.symbol.FullyConnected(data = item, num_hidden = hidden)\n",
" # predict by the inner product\n",
" pred = user * item\n",
" pred = mx.symbol.sum_axis(data = pred, axis = 1)\n",
" pred = mx.symbol.Flatten(data = pred)\n",
" # loss layer\n",
" pred = mx.symbol.LinearRegressionOutput(data = pred, label = score)\n",
" return pred\n",
"\n",
"net2 = get_one_layer_mlp(64, 64)\n",
"mx.viz.plot_network(net2)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"scrolled": false
},
"outputs": [],
"source": [
"results2 = train(net2, train_test_data, num_epoch=20, learning_rate=0.001, optimizer='adam', ctx=ctx)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deep Neural Network (Residual Network / ResNet)\n",
"Borrowing ideas from [Deep Residual Learning for Image Recognition (He, et al.)](https://arxiv.org/abs/1512.03385) to build a complex deep network that is aggressively regularized to avoid over-fitting, but still achieves good performance. "
]
},
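{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each tower stacks two residual blocks on top of the embedding. A block applies a fully connected layer, ReLU, dropout, and a second fully connected layer, then adds the block input back through a skip connection:\n",
"\n",
"$$\\mathbf{h}_{\\mathrm{out}} = \\mathbf{h}_{\\mathrm{in}} + W_2\\,\\mathrm{dropout}(\\mathrm{relu}(W_1 \\mathbf{h}_{\\mathrm{in}} + \\mathbf{b}_1)) + \\mathbf{b}_2,$$\n",
"\n",
"with an extra dropout layer between the two blocks, as in the code below."
]
},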
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"scrolled": false
},
"outputs": [],
"source": [
"def get_multi_layer_dropout_resnet(hidden, k):\n",
" # input\n",
" user = mx.symbol.Variable('user')\n",
" item = mx.symbol.Variable('item')\n",
" score = mx.symbol.Variable('score')\n",
" # user latent features\n",
" user1 = mx.symbol.Embedding(data = user, input_dim = max_user, output_dim = k)\n",
" user = mx.symbol.FullyConnected(data = user1, num_hidden = hidden)\n",
" user = mx.symbol.Activation(data = user, act_type='relu')\n",
" user = mx.symbol.Dropout(data=user, p=0.5)\n",
" user = mx.symbol.FullyConnected(data = user, num_hidden = hidden)\n",
" user2 = user + user1\n",
" user2 = mx.symbol.Dropout(data=user2, p=0.5)\n",
" user = mx.symbol.FullyConnected(data = user2, num_hidden = hidden)\n",
" user = mx.symbol.Activation(data = user, act_type='relu')\n",
" user = mx.symbol.Dropout(data=user, p=0.5)\n",
" user = mx.symbol.FullyConnected(data = user, num_hidden = hidden)\n",
" user = user + user2\n",
" # item latent features\n",
" item1 = mx.symbol.Embedding(data = item, input_dim = max_item, output_dim = k)\n",
" item = mx.symbol.FullyConnected(data = item1, num_hidden = hidden)\n",
" item = mx.symbol.Activation(data = item, act_type='relu')\n",
" item = mx.symbol.Dropout(data=item, p=0.5) \n",
" item = mx.symbol.FullyConnected(data=item, num_hidden = hidden)\n",
" item2 = item + item1\n",
" item2 = mx.symbol.Dropout(data=item2, p=0.5) \n",
" item = mx.symbol.FullyConnected(data = item2, num_hidden = hidden)\n",
" item = mx.symbol.Activation(data = item, act_type='relu')\n",
" item = mx.symbol.Dropout(data=item, p=0.5) \n",
" item = mx.symbol.FullyConnected(data=item, num_hidden = hidden)\n",
" item = item + item2\n",
" # predict by the inner product\n",
" pred = user * item\n",
" pred = mx.symbol.sum_axis(data = pred, axis = 1)\n",
" pred = mx.symbol.Flatten(data = pred)\n",
" # loss layer\n",
" pred = mx.symbol.LinearRegressionOutput(data = pred, label = score)\n",
" return pred\n",
"net3 = get_multi_layer_dropout_resnet(64, 64)\n",
"mx.viz.plot_network(net3)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Larger batch size makes GPU more efficient for this complex model\n",
"train_test_data2 = get_data_iter(batch_size=200) \n",
"results3 = train(net3, train_test_data2, num_epoch=25, learning_rate=0.001, optimizer='adam', ctx=ctx)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Visualizing results\n",
"Compare accuracy and training time across the models."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import bokeh\n",
"import bokeh.io\n",
"import bokeh.plotting\n",
"bokeh.io.output_notebook()\n",
"import pandas as pd\n",
"\n",
"def viz_lines(fig, results, legend, color):\n",
" df = pd.DataFrame(results._data['eval'])\n",
" fig.line(df.elapsed,df.RMSE, color=color, legend=legend, line_width=2)\n",
" df = pd.DataFrame(results._data['train'])\n",
" fig.line(df.elapsed,df.RMSE, color=color, line_dash='dotted', alpha=0.1)\n",
"\n",
"fig = bokeh.plotting.Figure(x_axis_type='datetime', x_axis_label='Training time', y_axis_label='RMSE')\n",
"viz_lines(fig, results1, \"Linear MF\", \"orange\")\n",
"viz_lines(fig, results2, \"MLP\", \"blue\")\n",
"viz_lines(fig, results3, \"ResNet\", \"red\")\n",
"\n",
"bokeh.io.show(fig)"
]
},
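{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick numeric summary, we can also pull the best evaluation RMSE out of each result object. This is a minimal sketch that assumes `results._data['eval']` exposes an `RMSE` column, as used in the plotting code above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Sketch: print the best (lowest) evaluation RMSE for each model.\n",
"# Assumes the same results._data['eval'] structure used for plotting above.\n",
"for name, res in [('Linear MF', results1), ('MLP', results2), ('ResNet', results3)]:\n",
"    best_rmse = pd.DataFrame(res._data['eval']).RMSE.min()\n",
"    print('%-10s best eval RMSE: %.4f' % (name, best_rmse))"
]
},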
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python [Root]",
"language": "python",
"name": "Python [Root]"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.12"
}
},
"nbformat": 4,
"nbformat_minor": 1
}