| Package | Description |
|---|---|
| org.nd4j.linalg.learning | |
| org.nd4j.linalg.learning.config | |
| Modifier and Type | Class and Description |
|---|---|
| `class` | `AdaDeltaUpdater`: the AdaDelta updater. See http://www.matthewzeiler.com/pubs/googleTR2012/googleTR2012.pdf and https://arxiv.org/pdf/1212.5701v1.pdf |
| `class` | `AdaGradUpdater`: vectorized learning rate, maintained per connection weight (sketched after this table). Adapted from http://xcorr.net/2014/01/23/adagrad-eliminating-learning-rates-in-stochastic-gradient-descent/; see also http://cs231n.github.io/neural-networks-3/#ada |
| `class` | `AdaMaxUpdater`: the AdaMax updater, a variant of Adam. |
| `class` | `AdamUpdater`: the Adam updater. |
| `class` | `NadamUpdater`: the Nadam updater. |
| `class` | `NesterovsUpdater`: Nesterov's momentum updater. |
| `class` | `NoOpUpdater`: a gradient updater that makes no changes to the gradient. |
| `class` | `RmsPropUpdater`: the RMSProp updater. |
| `class` | `SgdUpdater`: SGD updater; applies a learning rate only. |
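To make the per-weight learning-rate idea behind `AdaGradUpdater` concrete, here is a minimal sketch of the AdaGrad rule the description refers to: each parameter accumulates the sum of its squared gradients, and that sum shrinks the effective step size for frequently-updated weights. The class and method names below (`AdaGradSketch`, `applyUpdate`) are illustrative only, not the ND4J implementation, which operates on `INDArray` state views rather than plain arrays.

```java
/**
 * Illustrative AdaGrad sketch (not the ND4J implementation).
 * Update rule per parameter i:
 *   h_i += g_i^2
 *   g_i  = lr * g_i / (sqrt(h_i) + eps)
 */
public class AdaGradSketch {
    private final double learningRate;
    private final double epsilon;
    private final double[] historicalGradient; // running sum of squared gradients, one entry per weight

    public AdaGradSketch(int numParams, double learningRate, double epsilon) {
        this.learningRate = learningRate;
        this.epsilon = epsilon;
        this.historicalGradient = new double[numParams];
    }

    /** Converts a raw gradient into an update step, in place. */
    public void applyUpdate(double[] gradient) {
        for (int i = 0; i < gradient.length; i++) {
            historicalGradient[i] += gradient[i] * gradient[i];
            gradient[i] = learningRate * gradient[i]
                    / (Math.sqrt(historicalGradient[i]) + epsilon);
        }
    }
}
```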
| Modifier and Type | Method and Description |
|---|---|
| `GradientUpdater` | `Sgd.instantiate(INDArray viewArray, boolean initializeViewArray)` |
| `GradientUpdater` | `Nadam.instantiate(INDArray viewArray, boolean initializeViewArray)` |
| `GradientUpdater` | `AdaGrad.instantiate(INDArray viewArray, boolean initializeViewArray)` |
| `GradientUpdater` | `NoOp.instantiate(INDArray viewArray, boolean initializeViewArray)` |
| `GradientUpdater` | `Adam.instantiate(INDArray viewArray, boolean initializeViewArray)` |
| `GradientUpdater` | `IUpdater.instantiate(INDArray viewArray, boolean initializeViewArray)` Create a new gradient updater. (Usage sketched below.) |
| `GradientUpdater` | `RmsProp.instantiate(INDArray viewArray, boolean initializeViewArray)` |
| `GradientUpdater` | `Nesterovs.instantiate(INDArray viewArray, boolean initializeViewArray)` |
| `GradientUpdater` | `AdaMax.instantiate(INDArray viewArray, boolean initializeViewArray)` |
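The pattern behind all of these `instantiate` methods: an `IUpdater` config object (e.g. `Adam`) is a stateless description, and `instantiate` binds it to a flat state view array to produce the mutable `GradientUpdater`. A minimal usage sketch follows, assuming the usual ND4J flow where `stateSize(numParams)` reports the required view length; note that the exact `applyUpdater` signature has varied across ND4J versions (older releases omit the epoch argument), so treat the call below as indicative rather than definitive.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.learning.GradientUpdater;
import org.nd4j.linalg.learning.config.Adam;

public class UpdaterExample {
    public static void main(String[] args) {
        int numParams = 10;             // number of parameters being optimized
        Adam config = new Adam(0.001);  // Adam config with learning rate 1e-3

        // Adam keeps state (first and second moments) per parameter;
        // stateSize reports the total length the state view array must have.
        long stateSize = config.stateSize(numParams);
        INDArray view = Nd4j.zeros(1, (int) stateSize);

        // instantiate(...) returns a GradientUpdater backed by 'view';
        // initializeViewArray=true initializes the state in place.
        GradientUpdater updater = config.instantiate(view, true);

        // Applying the updater transforms the raw gradient, in place,
        // into the update step to subtract from the parameters.
        INDArray gradient = Nd4j.rand(1, numParams);
        updater.applyUpdater(gradient, 0, 0); // iteration 0, epoch 0
        System.out.println(gradient);
    }
}
```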
Copyright © 2018. All rights reserved.