``mx.nd.nag.mom.update``
================================================
Description
----------------------
Update function for the Nesterov Accelerated Gradient (NAG) optimizer.
It updates the weights using the following formula:
.. math::

   v_t = \gamma v_{t-1} + \eta \nabla J(W_{t-1} - \gamma v_{t-1})\\
   W_t = W_{t-1} - v_t
Where:

- :math:`\eta` is the learning rate of the optimizer,
- :math:`\gamma` is the decay rate of the momentum estimate,
- :math:`v_t` is the update vector at time step :math:`t`,
- :math:`W_t` is the weight vector at time step :math:`t`.
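The two equations above can be sketched in plain Python (a minimal illustration, not the operator's implementation; the function and argument names here are hypothetical, and ``grad_fn`` evaluates :math:`\nabla J` at the look-ahead point):

```python
def nag_mom_update(weight, grad_fn, mom, lr, momentum):
    """One NAG step following the formula above (illustrative sketch)."""
    # Look-ahead point: W_{t-1} - gamma * v_{t-1}
    lookahead = weight - momentum * mom
    # v_t = gamma * v_{t-1} + eta * grad J(lookahead)
    v = momentum * mom + lr * grad_fn(lookahead)
    # W_t = W_{t-1} - v_t
    return weight - v, v
```

Evaluating the gradient at the look-ahead point, rather than at the current weights, is what distinguishes NAG from classical momentum.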
Arguments
------------------
+----------------------------------------+------------------------------------------------------------+
| Argument | Description |
+========================================+============================================================+
| ``weight`` | NDArray-or-Symbol. |
| | |
| | Weight |
+----------------------------------------+------------------------------------------------------------+
| ``grad`` | NDArray-or-Symbol. |
| | |
| | Gradient |
+----------------------------------------+------------------------------------------------------------+
| ``mom`` | NDArray-or-Symbol. |
| | |
| | Momentum |
+----------------------------------------+------------------------------------------------------------+
| ``lr`` | float, required. |
| | |
| | Learning rate |
+----------------------------------------+------------------------------------------------------------+
| ``momentum`` | float, optional, default=0. |
| | |
| | The decay rate of momentum estimates at each epoch. |
+----------------------------------------+------------------------------------------------------------+
| ``wd`` | float, optional, default=0. |
| | |
| | Weight decay augments the objective function with a |
| | regularization term that penalizes large weights. The |
| | penalty scales with the square of the magnitude of each |
| | weight. |
+----------------------------------------+------------------------------------------------------------+
| ``rescale.grad`` | float, optional, default=1. |
| | |
| | Rescale gradient to grad = rescale_grad*grad. |
+----------------------------------------+------------------------------------------------------------+
| ``clip.gradient`` | float, optional, default=-1. |
| | |
| | Clip gradient to the range of [-clip_gradient, |
|                                        | clip_gradient]. If clip_gradient <= 0, gradient clipping   |
|                                        | is turned off. grad = max(min(grad, clip_gradient),        |
|                                        | -clip_gradient).                                           |
+----------------------------------------+------------------------------------------------------------+
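The ``wd``, ``rescale.grad``, and ``clip.gradient`` arguments together determine the effective gradient fed into the momentum update. A minimal sketch of that preprocessing, assuming the conventional order of rescale first, then clip, then add the weight-decay term (the function name is hypothetical):

```python
import numpy as np

def preprocess_grad(grad, weight, rescale_grad=1.0, clip_gradient=-1.0, wd=0.0):
    """Apply rescale, clipping, and weight decay to a raw gradient (sketch)."""
    # Rescale: grad = rescale_grad * grad
    g = rescale_grad * grad
    # Clip to [-clip_gradient, clip_gradient]; clip_gradient <= 0 disables clipping
    if clip_gradient > 0:
        g = np.clip(g, -clip_gradient, clip_gradient)
    # Weight decay adds wd * weight, the gradient of the squared-magnitude penalty
    return g + wd * weight
```

Note that the weight-decay term is added after clipping, so only the raw gradient is bounded by ``clip.gradient``.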
Value
----------
``out``: the result ``mx.ndarray``.
Link to Source Code: http://github.com/apache/incubator-mxnet/blob/1.6.0/src/operator/optimizer_op.cc#L726