com.intel.analytics.zoo.pipeline.api.keras.models
Add a sub-module to the sequential container.
The module to be added.
This sequential container.
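A minimal sketch of stacking layers with chained add calls (layer names and package paths are assumed from the public Analytics Zoo Keras-style API; verify against the version in use):

```scala
import com.intel.analytics.bigdl.utils.Shape
import com.intel.analytics.zoo.pipeline.api.keras.layers.{Activation, Dense}
import com.intel.analytics.zoo.pipeline.api.keras.models.Sequential

// Each add returns this sequential container, so calls can be chained.
val model = Sequential[Float]()
model.add(Dense[Float](outputDim = 8, inputShape = Shape(4)))
  .add(Activation[Float]("relu"))
  .add(Dense[Float](outputDim = 2))
```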
Clear gradient clipping parameters. In this case, gradient clipping will not be applied. In order to take effect, it needs to be called before fit.
You can also use a custom loss function during compile.
Alternatively, one can pass in the corresponding Keras-style string representations when calling compile.
For example: optimizer = "sgd", loss = "mse", metrics = List("accuracy")
Configure the learning process. It MUST be called before fit or evaluate.
Optimization method to be used.
Criterion to be used.
Validation method(s) to be used. Default is null if no validation is needed.
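A hedged sketch of compile using the string shortcuts from the example above (an optimizer object and criterion instance could be passed instead):

```scala
// Must be called before fit or evaluate.
model.compile(optimizer = "sgd", loss = "mse", metrics = List("accuracy"))
```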
Evaluate a model on TextSet.
Evaluation dataset, TextSet.
Number of samples per batch.
Evaluate a model on ImageSet.
Evaluation dataset, ImageSet.
Number of samples per batch.
Evaluate a model in local mode.
Evaluate a model on a given RDD.
Evaluation dataset, RDD of Sample.
Number of samples per batch.
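A sketch of evaluation on an RDD of Sample (valRdd is a hypothetical RDD[Sample[Float]]; metrics must have been set via compile beforehand):

```scala
// Returns one result per validation method configured in compile.
val results = model.evaluate(valRdd, batchSize = 32)
println(results.mkString("\n"))
```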
Train a model for a fixed number of epochs on TextSet.
Training dataset, TextSet.
Number of samples per gradient update.
Number of epochs to train.
TextSet, or null if validation is not configured.
Train a model for a fixed number of epochs on ImageSet.
Training dataset, ImageSet.
Number of samples per gradient update.
Number of epochs to train.
ImageSet, or null if validation is not configured.
Train a model for a fixed number of epochs on Sample RDD.
Training dataset, RDD of Sample.
Number of samples per gradient update. Default is 32.
Number of epochs to train. Default is 10.
RDD of Sample, or null if validation is not configured. Default is null.
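A sketch of a training run on an RDD of Sample (trainRdd and valRdd are hypothetical RDD[Sample[Float]] handles; parameter names follow the defaults listed above):

```scala
// compile must be called before fit.
model.compile(optimizer = "sgd", loss = "mse", metrics = List("accuracy"))
model.fit(trainRdd, batchSize = 32, nbEpoch = 10, validationData = valRdd)
```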
Train a model for a fixed number of epochs on a DataSet.
Training dataset. If x is an instance of LocalDataSet, train in local mode.
Number of epochs to train.
Dataset for validation, or null if validation is not configured.
Build the graph: other modules point to the current module.
Upstream variables.
The Variable containing the current module.
Get scalars such as "Loss" or "LearningRate" from the training summary. The return value is an array of 3-tuples.
The string representing the parameter you want to return. Supported tags are "LearningRate", "Loss" and "Throughput".
Get scalars such as "Loss" or "Top1Accuracy" from the validation summary. The return value is an array of 3-tuples.
The string representing the parameter you want to return. Supported tags are 'AUC', 'Accuracy', 'BinaryAccuracy', 'CategoricalAccuracy', 'HitRatio', 'Loss', 'MAE', 'NDCG', 'SparseCategoricalAccuracy', 'TFValidationMethod', 'Top1Accuracy', 'Top5Accuracy' and 'TreeNNAccuracy'.
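A sketch of reading scalars back after training (this assumes setTensorBoard was called before fit; the exact tuple layout follows BigDL's summary format and should be verified against the version in use):

```scala
// Each entry is a 3-tuple; in BigDL's summary format this is
// (iteration, value, timestamp).
val trainLoss = model.getTrainSummary("Loss")
val valAcc = model.getValidationSummary("Top1Accuracy")
```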
The default batchPerThread is 4. For DistributedTextSet, the total batchSize is batchPerThread * rdd.getNumPartitions. For LocalTextSet, the total batchSize is batchPerThread * numOfCores.
Prediction data, TextSet.
Use a model to do prediction on TextSet.
Prediction data, TextSet.
The total batch size is batchPerThread * rdd.getNumPartitions (distributed mode) or batchPerThread * numOfCores (local mode).
The default batchPerThread is 4. For DistributedImageSet, the total batchSize is batchPerThread * rdd.getNumPartitions. For LocalImageSet, the total batchSize is batchPerThread * numOfCores.
Prediction data, ImageSet.
Use a model to do prediction on ImageSet.
Prediction data, ImageSet.
The total batch size is batchPerThread * rdd.getNumPartitions (distributed mode) or batchPerThread * numOfCores (local mode).
Use a model to do prediction in local mode. The total batch size is batchPerThread * numOfCores, and batchPerThread is 4 by default.
Prediction data, array of Sample.
Use a model to do prediction in local mode.
Prediction data, array of Sample.
The total batchSize is batchPerThread * numOfCores.
Use a model to do prediction in local mode. The total batch size is batchPerThread * numOfCores, and batchPerThread is 4 by default.
Prediction data, LocalDataSet.
Use a model to do prediction in local mode.
Prediction data, LocalDataSet.
The total batchSize is batchPerThread * numOfCores.
Use a model to do prediction for RDD. The default batchPerThread is 4, and the total batchSize is batchPerThread * rdd.getNumPartitions.
Prediction data, RDD of Sample.
Use a model to do prediction for RDD.
Prediction data, RDD of Sample.
The total batchSize is batchPerThread * rdd.getNumPartitions.
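A sketch of distributed prediction (dataRdd is a hypothetical RDD[Sample[Float]]):

```scala
// Total batch size = batchPerThread * rdd.getNumPartitions.
val predictions = model.predict(dataRdd, batchPerThread = 4)
```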
Use a model to predict for classes. By default, label predictions start from 0.
Prediction data, RDD of Sample.
The default batchPerThread is 4, and the total batchSize is batchPerThread * rdd.getNumPartitions.
Boolean. Whether result labels start from 0. Default is true. If false, result labels start from 1.
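A sketch of class prediction (the method and parameter names are assumed from the description above; verify against the version in use):

```scala
// Returns the predicted class labels; labels start from 0 by default.
val classes = model.predictClasses(dataRdd, batchPerThread = 4, zeroBasedLabel = true)
```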
Release the DataSet from memory. This method releases the RDD that was cached when toDataSet() was called. TODO: modify this when BigDL fixes this issue.
Configure checkpoint settings to write snapshots every epoch during the training process. In order to take effect, it needs to be called before fit.
The path to save snapshots. Make sure this path exists beforehand.
Whether to overwrite existing snapshots in the given path. Default is true.
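A sketch of enabling per-epoch snapshots (the path is a hypothetical example and must exist beforehand):

```scala
// Must be called before fit to take effect.
model.setCheckpoint("/tmp/snapshots", overWrite = true)
```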
Set constant gradient clipping during the training process. In order to take effect, it needs to be called before fit.
The minimum value to clip by. Double.
The maximum value to clip by. Double.
Set the model to be in evaluate status, i.e. remove the effect of Dropout, etc.
Clip gradient to a maximum L2-Norm during the training process. In order to take effect, it needs to be called before fit.
Gradient L2-Norm threshold. Double.
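A sketch of the two clipping modes (method and parameter names are assumed from the behavior described above; typically only one mode is used at a time, and both must be set before fit):

```scala
// Constant clipping: clamp each gradient element into [min, max].
model.setConstantGradientClipping(min = -2.0, max = 2.0)
// L2-norm clipping: rescale gradients whose L2 norm exceeds the threshold.
model.setGradientClippingByL2Norm(clipNorm = 2.0)
```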
Set summary information during the training process for visualization purposes. Saved summary can be viewed via TensorBoard. In order to take effect, it needs to be called before fit.
Training summary will be saved to 'logDir/appName/train' and validation summary (if any) will be saved to 'logDir/appName/validation'.
The base directory path to store training and validation logs.
The name of the application.
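A sketch of enabling TensorBoard summaries (logDir and appName are hypothetical values):

```scala
// Must be called before fit; logs go to logDir/appName/train
// and logDir/appName/validation.
model.setTensorBoard(logDir = "/tmp/zoo_logs", appName = "myModel")
```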
Print out the summary information of an Analytics Zoo Keras Model.
For each layer in the model, there will be a separate row containing four columns: Layer (type), Output Shape, Param # and Connected to.
In addition, the total number of parameters of this model, separated into trainable and non-trainable counts, will be printed out after the table.
The total length of one row. Default is 120.
The maximum absolute length proportion (%) of each field. Array of Double of length 4. Default is Array(.33, .55, .67, 1), meaning that the first field occupies up to 33% of lineLength, the second up to (55-33)%, the third up to (67-55)%, and the fourth the remaining (100-67)%. If a field is longer than its share, the excess is trimmed; if shorter, the remainder is filled with white spaces. Usually you don't need to adjust this parameter.
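As an illustration of how the default proportions map to absolute column widths (plain Scala arithmetic, independent of the library; the exact rounding inside summary may differ):

```scala
// Cumulative proportions of lineLength give the four column boundaries.
val lineLength = 120
val positions = Array(0.33, 0.55, 0.67, 1.0)
val boundaries = positions.map(p => (p * lineLength).toInt) // 39, 66, 80, 120
// Width of each field = distance from the previous boundary.
val widths = boundaries.zip(0 +: boundaries.init).map { case (end, start) => end - start }
println(widths.mkString(", ")) // 39, 27, 14, 40
```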
(Since version 0.3.0) Deprecated. Please use the recommended saveModule(path, overWrite) instead.