Say you have a parameter `W` initialized for your model and its gradient stored as `∇` (perhaps obtained from the AutoGrad APIs). Here is a minimal snippet that updates `W` with `SGD`.
```julia
using MXNet

# Build the optimizer and get an updater closure that applies it in place.
opt = SGD(η = 10)
descend! = getupdater(opt)

W = NDArray(Float32[1, 2, 3, 4]);
∇ = NDArray(Float32[.1, .2, .3, .4]);

# Apply one update to parameter index 1.
descend!(1, ∇, W)
```
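For reference, plain SGD (ignoring the optimizer's optional momentum and weight-decay terms) updates a parameter as `W ← W − η∇`. The sketch below reproduces that rule with ordinary Julia arrays so you can check the numbers in the example above; it is only an illustration of the update rule, not the library's implementation.

```julia
# A minimal sketch of the plain SGD rule, assuming no momentum and
# no weight decay (both are optional features of the real optimizer).
η = 10
W_plain = Float32[1, 2, 3, 4]
∇_plain = Float32[.1, .2, .3, .4]

# W ← W − η∇ ; with these values every entry becomes 0.
W_plain .-= η .* ∇_plain
```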
```@autodocs
Modules = [MXNet.mx, MXNet.mx.LearningRate, MXNet.mx.Momentum]
Pages = ["optimizer.jl"]
```
Modules = [MXNet.mx] Pages = ["optimizers/sgd.jl"]
Modules = [MXNet.mx] Pages = ["optimizers/adam.jl"]
Modules = [MXNet.mx] Pages = ["optimizers/adagrad.jl"]
Modules = [MXNet.mx] Pages = ["optimizers/adadelta.jl"]
Modules = [MXNet.mx] Pages = ["optimizers/adamax.jl"]
Modules = [MXNet.mx] Pages = ["optimizers/rmsprop.jl"]
Modules = [MXNet.mx] Pages = ["optimizers/nadam.jl"]