Optimizers

Say you have a parameter W initialized for your model, and its gradient stored in ∇ (perhaps obtained from the AutoGrad APIs). Here is a minimal snippet that updates W with SGD.

using MXNet

opt = SGD(η = 10)          # SGD optimizer with learning rate η = 10
descend! = getupdater(opt) # updater function that applies opt's update rule to a parameter

W = NDArray(Float32[1, 2, 3, 4]);
∇ = NDArray(Float32[.1, .2, .3, .4]);

descend!(1, ∇, W)  # update parameter 1 in-place: W is modified using the gradient ∇
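
In the simplest setting (ignoring momentum and any weight decay the constructor may enable by default), this call performs the plain SGD step

W \leftarrow W - \eta \nabla

so with \eta = 10 the example above maps W = [1, 2, 3, 4] to approximately [0, 0, 0, 0].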
Modules = [MXNet.mx, MXNet.mx.LearningRate, MXNet.mx.Momentum]
Pages = ["optimizer.jl"]

Built-in optimizers

Stochastic Gradient Descent

Modules = [MXNet.mx]
Pages = ["optimizers/sgd.jl"]
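
As a sketch in standard notation (the exact keyword names and defaults are given by the docstrings above), SGD with learning rate \eta and optional momentum \mu performs

v_t = \mu v_{t-1} - \eta \nabla
W_t = W_{t-1} + v_t

which reduces to W_t = W_{t-1} - \eta \nabla when \mu = 0.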

ADAM

Modules = [MXNet.mx]
Pages = ["optimizers/adam.jl"]
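
For orientation, the standard Adam update in the usual notation, with moment decay rates \beta_1, \beta_2 and a small \epsilon, is

m_t = \beta_1 m_{t-1} + (1 - \beta_1) \nabla
v_t = \beta_2 v_{t-1} + (1 - \beta_2) \nabla^2
\hat{m}_t = m_t / (1 - \beta_1^t), \quad \hat{v}_t = v_t / (1 - \beta_2^t)
W_t = W_{t-1} - \eta \, \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)

See the docstrings above for the keyword names and default values used by this implementation.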

AdaGrad

Modules = [MXNet.mx]
Pages = ["optimizers/adagrad.jl"]
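
As a sketch, AdaGrad accumulates squared gradients and scales each step by their square root (again, see the docstrings above for the actual parameters):

G_t = G_{t-1} + \nabla^2
W_t = W_{t-1} - \eta \, \nabla / (\sqrt{G_t} + \epsilon)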

AdaDelta

Modules = [MXNet.mx]
Pages = ["optimizers/adadelta.jl"]
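
As a sketch, AdaDelta maintains running averages of squared gradients and squared updates with decay rate \rho, so it needs no global learning rate:

E[g^2]_t = \rho \, E[g^2]_{t-1} + (1 - \rho) \nabla^2
\Delta W_t = - \sqrt{E[\Delta W^2]_{t-1} + \epsilon} / \sqrt{E[g^2]_t + \epsilon} \; \nabla
E[\Delta W^2]_t = \rho \, E[\Delta W^2]_{t-1} + (1 - \rho) \Delta W_t^2
W_t = W_{t-1} + \Delta W_t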

AdaMax

Modules = [MXNet.mx]
Pages = ["optimizers/adamax.jl"]
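
As a sketch, AdaMax is the infinity-norm variant of Adam: the second-moment estimate is replaced by an exponentially weighted maximum of gradient magnitudes:

m_t = \beta_1 m_{t-1} + (1 - \beta_1) \nabla
u_t = \max(\beta_2 u_{t-1}, |\nabla|)
W_t = W_{t-1} - \eta / (1 - \beta_1^t) \cdot m_t / u_t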

RMSProp

Modules = [MXNet.mx]
Pages = ["optimizers/rmsprop.jl"]
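
As a sketch, basic RMSProp divides the gradient by a running root-mean-square of recent gradients (the implementation may also offer a centered variant and momentum; see the docstrings above):

E[g^2]_t = \rho \, E[g^2]_{t-1} + (1 - \rho) \nabla^2
W_t = W_{t-1} - \eta \, \nabla / (\sqrt{E[g^2]_t} + \epsilon)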

Nadam

Modules = [MXNet.mx]
Pages = ["optimizers/nadam.jl"]
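
As a sketch, Nadam combines Adam's moment estimates with a Nesterov-style look-ahead on the first moment. A simplified form (the implementation may use a momentum schedule rather than a fixed \beta_1) is

m_t = \beta_1 m_{t-1} + (1 - \beta_1) \nabla, \quad v_t = \beta_2 v_{t-1} + (1 - \beta_2) \nabla^2
\hat{m}_t = m_t / (1 - \beta_1^{t+1}), \quad \hat{v}_t = v_t / (1 - \beta_2^t)
W_t = W_{t-1} - \eta \, \big( \beta_1 \hat{m}_t + (1 - \beta_1) \nabla / (1 - \beta_1^t) \big) / (\sqrt{\hat{v}_t} + \epsilon)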