Package org.deeplearning4j.optimize.api
Interface ConvexOptimizer
- All Superinterfaces:
  Serializable
- All Known Implementing Classes:
  BaseOptimizer, ConjugateGradient, LBFGS, LineGradientDescent, StochasticGradientDescent
public interface ConvexOptimizer extends Serializable
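A ConvexOptimizer drives the gradient-and-score / optimize cycle for a model. The sketch below shows a typical call sequence against this interface, assuming an already-constructed optimizer and workspace manager (how the optimizer is obtained, for example from a Solver, is outside the scope of this interface). The OptimizerSketch class and runOnce helper are illustrative assumptions, and the Pair import path differs between releases.

import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.deeplearning4j.optimize.api.ConvexOptimizer;
import org.nd4j.common.primitives.Pair; // org.nd4j.linalg.primitives.Pair in older releases

public class OptimizerSketch {

    /**
     * Runs a single optimization pass against the ConvexOptimizer API.
     * The optimizer itself is assumed to have been set up elsewhere.
     */
    public static boolean runOnce(ConvexOptimizer optimizer, LayerWorkspaceMgr workspaceMgr) {
        // Current gradient and score, returned together as a pair
        Pair<Gradient, Double> gradientAndScore = optimizer.gradientAndScore(workspaceMgr);
        System.out.println("score before optimize(): " + gradientAndScore.getSecond());

        // Run the optimization; the return value indicates whether it converged
        boolean converged = optimizer.optimize(workspaceMgr);

        System.out.println("score after optimize(): " + optimizer.score());
        return converged;
    }
}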
Method Summary

- int batchSize()
  The batch size for the optimizer
- ComputationGraphUpdater getComputationGraphUpdater()
- ComputationGraphUpdater getComputationGraphUpdater(boolean initializeIfReq)
- NeuralNetConfiguration getConf()
- GradientsAccumulator getGradientsAccumulator()
  This method returns the GradientsAccumulator instance used in this optimizer.
- StepFunction getStepFunction()
  This method returns the StepFunction defined within this Optimizer instance
- Updater getUpdater()
- Updater getUpdater(boolean initializeIfReq)
- Pair<Gradient,Double> gradientAndScore(LayerWorkspaceMgr workspaceMgr)
  The gradient and score for this optimizer
- boolean optimize(LayerWorkspaceMgr workspaceMgr)
  Calls optimize
- void postStep(INDArray line)
  After the step has been made, do an action
- void preProcessLine()
  Pre-process a line before an iteration
- double score()
  The score for the optimizer so far
- void setBatchSize(int batchSize)
  Set the batch size for the optimizer
- void setGradientsAccumulator(GradientsAccumulator accumulator)
  This method specifies the GradientsAccumulator instance to be used for sharing updates across multiple models
- void setListeners(Collection<TrainingListener> listeners)
- void setUpdater(Updater updater)
- void setUpdaterComputationGraph(ComputationGraphUpdater updater)
- void setupSearchState(Pair<Gradient,Double> pair)
  Based on the gradient and score, set up a search state
- void updateGradientAccordingToParams(Gradient gradient, Model model, int batchSize, LayerWorkspaceMgr workspaceMgr)
  Update the gradient according to the configuration, such as AdaGrad, momentum, and sparsity
Method Detail
-
score
double score()
The score for the optimizer so far
- Returns:
- the score for this optimizer so far
-
getUpdater
Updater getUpdater()
-
getUpdater
Updater getUpdater(boolean initializeIfReq)
-
getComputationGraphUpdater
ComputationGraphUpdater getComputationGraphUpdater()
-
getComputationGraphUpdater
ComputationGraphUpdater getComputationGraphUpdater(boolean initializeIfReq)
-
setUpdater
void setUpdater(Updater updater)
-
setUpdaterComputationGraph
void setUpdaterComputationGraph(ComputationGraphUpdater updater)
-
setListeners
void setListeners(Collection<TrainingListener> listeners)
-
setGradientsAccumulator
void setGradientsAccumulator(GradientsAccumulator accumulator)
This method specifies the GradientsAccumulator instance to be used for sharing updates across multiple models.
- Parameters:
accumulator
- the GradientsAccumulator instance to use for sharing updates
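For illustration, a hedged sketch of sharing one accumulator between several optimizers. The SharedAccumulatorSketch class and shareUpdates helper are hypothetical, and the GradientsAccumulator import path may differ between releases.

import org.deeplearning4j.optimize.api.ConvexOptimizer;
// Import path may vary between releases
import org.deeplearning4j.optimize.solvers.accumulation.GradientsAccumulator;

public class SharedAccumulatorSketch {

    /**
     * Registers one shared accumulator with several optimizers so that
     * their gradient updates are exchanged rather than applied in isolation.
     */
    public static void shareUpdates(GradientsAccumulator shared, ConvexOptimizer... optimizers) {
        for (ConvexOptimizer optimizer : optimizers) {
            optimizer.setGradientsAccumulator(shared);
        }
        // getGradientsAccumulator() on any of these optimizers now returns the shared instance
    }
}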
-
getStepFunction
StepFunction getStepFunction()
This method returns the StepFunction defined within this Optimizer instance.
- Returns:
- the StepFunction for this optimizer
-
getGradientsAccumulator
GradientsAccumulator getGradientsAccumulator()
This method returns the GradientsAccumulator instance used in this optimizer. This method can return null.
- Returns:
- the GradientsAccumulator for this optimizer, or null if none has been set
-
getConf
NeuralNetConfiguration getConf()
-
gradientAndScore
Pair<Gradient,Double> gradientAndScore(LayerWorkspaceMgr workspaceMgr)
The gradient and score for this optimizer
- Returns:
- the gradient and score for this optimizer
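A minimal sketch of consuming the returned pair. The GradientAndScoreSketch class is hypothetical, and the Pair import path differs between releases; workspace scoping is skipped purely to keep the example short.

import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.deeplearning4j.optimize.api.ConvexOptimizer;
import org.nd4j.common.primitives.Pair; // org.nd4j.linalg.primitives.Pair in older releases

public class GradientAndScoreSketch {

    /** Logs the current score and the per-variable gradient keys for a given optimizer. */
    public static double logGradientAndScore(ConvexOptimizer optimizer) {
        // No workspace scoping, purely for simplicity of the example
        LayerWorkspaceMgr workspaceMgr = LayerWorkspaceMgr.noWorkspaces();

        Pair<Gradient, Double> pair = optimizer.gradientAndScore(workspaceMgr);
        Gradient gradient = pair.getFirst();
        double score = pair.getSecond();

        System.out.println("score = " + score
                + ", gradient variables = " + gradient.gradientForVariable().keySet());
        return score;
    }
}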
-
optimize
boolean optimize(LayerWorkspaceMgr workspaceMgr)
Calls optimize
- Returns:
- whether the convex optimizer converged or not
-
batchSize
int batchSize()
The batch size for the optimizer
- Returns:
-
setBatchSize
void setBatchSize(int batchSize)
Set the batch size for the optimizer
- Parameters:
batchSize
-
-
preProcessLine
void preProcessLine()
Pre-process a line before an iteration
-
postStep
void postStep(INDArray line)
After the step has been made, do an action
- Parameters:
line
- the line along which the step was made
-
setupSearchState
void setupSearchState(Pair<Gradient,Double> pair)
Based on the gradient and score, set up a search state
- Parameters:
pair
- the gradient and score
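To show where this fits relative to preProcessLine() and postStep(), here is a rough sketch of the order in which a line-search based implementation (such as ConjugateGradient or LBFGS) is expected to drive these hooks. The SearchStateSketch class is hypothetical, and the actual parameter stepping (done via the optimizer's StepFunction) is omitted.

import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.deeplearning4j.optimize.api.ConvexOptimizer;
import org.nd4j.common.primitives.Pair; // org.nd4j.linalg.primitives.Pair in older releases
import org.nd4j.linalg.api.ndarray.INDArray;

public class SearchStateSketch {

    /** Illustrates the setupSearchState -> preProcessLine -> (step) -> postStep ordering. */
    public static void lineSearchLifecycle(ConvexOptimizer optimizer,
                                           LayerWorkspaceMgr workspaceMgr,
                                           INDArray searchDirection) {
        // 1. Compute the current gradient and score and seed the search state from them
        Pair<Gradient, Double> gradientAndScore = optimizer.gradientAndScore(workspaceMgr);
        optimizer.setupSearchState(gradientAndScore);

        // 2. Pre-process the search direction ("line") before the iteration
        optimizer.preProcessLine();

        // ... the optimizer steps the parameters along the line here (via its StepFunction) ...

        // 3. After the step has been made, run any post-step bookkeeping on the line
        optimizer.postStep(searchDirection);
    }
}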
-
updateGradientAccordingToParams
void updateGradientAccordingToParams(Gradient gradient, Model model, int batchSize, LayerWorkspaceMgr workspaceMgr)
Update the gradient according to the configuration, such as AdaGrad, momentum, and sparsity
- Parameters:
gradient
- the gradient to modify
model
- the model with the parameters to update
batchSize
- the batch size for the update
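A minimal sketch of applying the configured updater to a raw gradient before it is used for a parameter step. The UpdaterSketch class and applyUpdater helper are hypothetical.

import org.deeplearning4j.nn.api.Model;
import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.deeplearning4j.optimize.api.ConvexOptimizer;

public class UpdaterSketch {

    /**
     * Applies the configured updater (e.g. AdaGrad, momentum) to a raw gradient;
     * the gradient is modified in place and can then be used for the parameter step.
     */
    public static void applyUpdater(ConvexOptimizer optimizer,
                                    Gradient rawGradient,
                                    Model model,
                                    int batchSize,
                                    LayerWorkspaceMgr workspaceMgr) {
        optimizer.updateGradientAccordingToParams(rawGradient, model, batchSize, workspaceMgr);
        // rawGradient now holds the post-updater values
    }
}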