public abstract class BasePretrainNetwork extends FeedForwardLayer
**Nested Class Summary**

| Modifier and Type | Class and Description |
|---|---|
| static class | BasePretrainNetwork.Builder&lt;T extends BasePretrainNetwork.Builder&lt;T&gt;&gt; |
**Field Summary**

| Modifier and Type | Field and Description |
|---|---|
| protected org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction | lossFunction |
| protected double | visibleBiasInit |
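The two fields above are normally populated through the builder rather than assigned directly. A minimal sketch, assuming the concrete AutoEncoder subclass of BasePretrainNetwork and builder methods named after the fields (lossFunction(...), visibleBiasInit(...)); the layer sizes are illustrative only:

```java
import org.deeplearning4j.nn.conf.layers.AutoEncoder;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

public class PretrainFieldConfig {
    public static void main(String[] args) {
        // Build a pretrain layer, populating the protected fields declared above
        AutoEncoder layer = new AutoEncoder.Builder()
                .nIn(784)                        // illustrative input size
                .nOut(250)                       // illustrative hidden size
                .lossFunction(LossFunction.MSE)  // -> protected lossFunction field
                .visibleBiasInit(0.1)            // -> protected visibleBiasInit field
                .build();

        System.out.println(layer);
    }
}
```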
Fields inherited from class FeedForwardLayer: nIn, nOut

Fields inherited from class BaseLayer: activationFn, biasInit, biasUpdater, dist, gradientNormalization, gradientNormalizationThreshold, iUpdater, l1, l1Bias, l2, l2Bias, weightInit, weightNoise

Fields inherited from class Layer: constraints, iDropout, layerName

**Constructor Summary**

| Constructor and Description |
|---|
| BasePretrainNetwork(BasePretrainNetwork.Builder builder) |
**Method Summary**

| Modifier and Type | Method and Description |
|---|---|
| double | getL1ByParam(String paramName) Get the L1 coefficient for the given parameter. |
| double | getL2ByParam(String paramName) Get the L2 coefficient for the given parameter. |
| boolean | isPretrainParam(String paramName) Is the specified parameter a layerwise pretraining only parameter? For example, visible bias params in an autoencoder (or decoder params in a variational autoencoder) aren't used during supervised backprop. Layers (like DenseLayer, etc.) with no pretrainable parameters will return false for all (valid) inputs. See the example following this table. |
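As a sketch of the distinction isPretrainParam draws, assuming the concrete AutoEncoder subclass and DL4J's usual parameter keys ("W" for weights, "b" for the hidden bias, "vb" for the visible bias):

```java
import org.deeplearning4j.nn.conf.layers.AutoEncoder;

public class PretrainParamCheck {
    public static void main(String[] args) {
        AutoEncoder layer = new AutoEncoder.Builder().nIn(784).nOut(250).build();

        // Visible bias exists only for reconstruction during layerwise pretraining
        System.out.println(layer.isPretrainParam("vb")); // expected: true
        // Weights and hidden bias are also used during supervised backprop
        System.out.println(layer.isPretrainParam("W"));  // expected: false
        System.out.println(layer.isPretrainParam("b"));  // expected: false
    }
}
```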
Methods inherited from class FeedForwardLayer: getOutputType, getPreProcessorForInputType, setNIn

Methods inherited from class BaseLayer: clone, getUpdaterByParam, resetLayerDefaultConfig

Methods inherited from class Layer: getMemoryReport, initializeConstraints, initializer, instantiate

**Field Detail**

protected org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction
protected double visibleBiasInit
**Constructor Detail**

public BasePretrainNetwork(BasePretrainNetwork.Builder builder)
**Method Detail**

public double getL1ByParam(String paramName)
Overrides: getL1ByParam in class FeedForwardLayer
Parameters: paramName - Parameter name

public double getL2ByParam(String paramName)
Overrides: getL2ByParam in class FeedForwardLayer
Parameters: paramName - Parameter name

public boolean isPretrainParam(String paramName)
Overrides: isPretrainParam in class FeedForwardLayer
Parameters: paramName - Parameter name/key
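A short sketch of the per-parameter lookup, again assuming the AutoEncoder subclass and the "W"/"b" parameter keys; the assumption here is that weight parameters pick up the builder's l1/l2 values while bias parameters use the separate l1Bias/l2Bias coefficients (0.0 when unset):

```java
import org.deeplearning4j.nn.conf.layers.AutoEncoder;

public class RegularizationLookup {
    public static void main(String[] args) {
        AutoEncoder layer = new AutoEncoder.Builder()
                .nIn(784).nOut(250)
                .l1(1e-5) // L1 coefficient for weight parameters
                .l2(1e-4) // L2 coefficient for weight parameters
                .build();

        System.out.println(layer.getL1ByParam("W")); // expected: 1.0E-5
        System.out.println(layer.getL2ByParam("W")); // expected: 1.0E-4
        System.out.println(layer.getL2ByParam("b")); // expected: 0.0 (l2Bias unset)
    }
}
```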