lamp-core API

Member index of package lamp (names that appear twice denote a type and its companion object).

package lamp.autograd
Add
ArcTan
ArgMax
Assign
Autograd
AvgPool2D
BatchNorm
BatchNorm2D
BatchedMatMul
BinaryCrossEntropyWithLogitsLoss
CappedShiftedNegativeExponential
CastToPrecision
Cholesky
CholeskySolve
Concatenate
ConstAdd
ConstMult
Constant
Constant
ConstantWithGrad
ConstantWithoutGrad
Convolution
Cos
Cross
Debug
Diag
Div
Dropout
ElementWiseMaximum
ElementWiseMinimum
Embedding
EqWhere
EuclideanDistance
Exp
Expand
ExpandAs
Flatten
Gelu
GraphMemoryAllocationReport
HardSwish
IndexAdd
IndexAddToTarget
IndexFill
IndexSelect
Inv
LayerNormOp
LeakyRelu
Log
Log1p
LogDet
LogSoftMax
MaskFill
MaskSelect
MatMul
MaxPool1D
MaxPool2D
Mean
Mean
Minus
MseLoss
Mult
NllLoss
NoReduction
Norm2
OneHot
Op
PInv
Pow
PowConst
Reduction
Relu
RepeatInterleave
Reshape
ScaledDotProductAttention
ScatterAdd
Select
Sigmoid
Sin
Slice
SmoothL1Loss
Softplus
SparseFromValueAndIndex
SquaredFrobeniusMatrixNorm
Stack
Sum
Sum
Tan
Tanh
ToDense
Transpose
Variable
Variable
VariableNonConstant
VariableNonConstant
Variance
View
WeightNorm
Where
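
The operations listed above are the nodes of lamp's reverse-mode autograd graph; user code reaches them through methods on lamp.autograd.Variable. The following is a minimal sketch of that flow. It assumes the documented Scope/STen/Variable style (const, param, STen.rand, mm, relu, sum, backprop, partialDerivative); exact signatures may differ, so treat it as illustrative rather than as the definitive API.

import lamp._
import lamp.autograd.{const, param}

// Sketch only: factory and method names are assumed from lamp's documented style.
object AutogradSketch extends App {
  Scope.root { implicit scope =>
    val x = const(STen.rand(List(4, 3)))  // constant: no gradient accumulated
    val w = param(STen.rand(List(3, 1)))  // parameter: gradient accumulated

    // mm, relu and sum correspond to the MatMul, Relu and Sum ops listed above
    val loss = x.mm(w).relu.sum

    loss.backprop()                       // reverse-mode pass through the graph

    println(w.partialDerivative)          // gradient of loss with respect to w
  }
}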

package lamp.nn

package lamp.nn.bert
BertEncoder
BertEncoder
PositionalEmbeddingWeight
BertLoss
BertLoss
BertLossInput
BertLossInput
BertPretrainInput
BertPretrainInput
BertPretrainModule
BertPretrainModule
BertPretrainOutput
MaskedLanguageModelModule
MaskedLanguageModelModule

package lamp.nn.graph
GCN
GCN
Graph
GraphAttention
GraphAttention
Weights
MPNN
MPNN
VertexPooling
Mean
PoolType
Sum

package lamp.nn.languagemodel
LanguageModelInput
LanguageModelInput
LanguageModelLoss
LanguageModelLoss
LanguageModelModule
LanguageModelModule
LanguageModelOutput
LanguageModelOutputNonVariable
LanguageModelOutputNonVariable
LossInput
LossInput

Members of package lamp.nn

AdamW
AdamW
AdversarialTraining
BatchNorm
BatchNorm
Bias
RunningMean
RunningVar
Weights
BatchNorm2D
BatchNorm2D
Bias
Weights
Conv1D
Conv1D
Bias
Weights
Conv2D
Conv2D
Bias
Weights
Conv2DTransposed
Conv2DTransposed
Bias
Weights
Debug
Debug
DependentHyperparameter
Dropout
Dropout
EitherModule
EitherModule
Tag
Embedding
Embedding
Weights
FreeRunningRNN
FreeRunningRNN
Fun
Fun
GRU
GRU
BiasH
BiasR
BiasZ
WeightHh
WeightHr
WeightHz
WeightXh
WeightXr
WeightXz
GenericFun
GenericFun
GenericModule
GenericModule
InitState
InitState
InitStateSyntax
LSTM
LSTM
BiasC
BiasF
BiasI
BiasO
WeightHc
WeightHf
WeightHi
WeightHo
WeightXc
WeightXf
WeightXi
WeightXo
LayerNorm
LayerNorm
Bias
Scale
LeafTag
LearningRateSchedule
LearningRateSchedule
ReduceLROnPlateauState
LiftedModule
LiftedModule
Linear
Linear
Bias
Weights
Load
Load
LoadSyntax
LossCalculation
LossFunction
LossFunctions
BCEWithLogits
Identity
MSE
NLL
SequenceNLL
SmoothL1Loss
MLP
ActivationFunction
Gelu
HardSwish
NormType
BatchNorm
LayerNorm
NoNorm
NormType
Relu
Sigmoid
Swish1
MappedState
MappedState
ModelWithOptimizer
MultiheadAttention
MultiheadAttention
WeightsK
WeightsO
WeightsQ
WeightsV
NoTag
Optimizer
OptimizerHyperparameter
PTag
PTag
PerturbedLossCalculation
PositionalEmbedding
RAdam
RAdam
RNN
RNN
BiasH
WeightHh
WeightXh
Recursive
Recursive
ResidualModule
ResidualModule
SGDW
SGDW
Seq2
Seq2
Seq2Seq
Seq2Seq
Seq3
Seq3
Seq4
Seq4
Seq5
Seq5
Seq6
Seq6
SeqLinear
SeqLinear
Bias
Weight
Sequential
Sequential
Tag
Shampoo
Shampoo
SimpleLossCalculation
StatefulSeq2
StatefulSeq2
StatefulSeq3
StatefulSeq3
StatefulSeq4
StatefulSeq4
StatefulSeq5
StatefulSeq5
SupervisedModel
ToLift
ToMappedState
ToUnlift
ToWithInit
TrainingMode
TrainingMode
TrainingModeSyntax
Transformer
Transformer
TransformerDecoder
TransformerDecoder
TransformerDecoderBlock
TransformerDecoderBlock
Bias1
Bias2
Weights1
Weights2
TransformerEmbedding
TransformerEmbedding
Embedding
TransformerEncoder
TransformerEncoder
TransformerEncoderBlock
TransformerEncoderBlock
Bias1
Bias2
Scale1
Scale2
Weights1
Weights2
UnliftedModule
UnliftedModule
WeightNormLinear
WeightNormLinear
Bias
WeightsG
WeightsV
WithInit
WithInit
Yogi
Yogi
sequence
simple
statefulSequence
util
syntax
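
The lamp.nn members above compose through the generic module machinery (GenericModule, Fun, sequence, Sequential) and train with the listed optimizers (AdamW, RAdam, SGDW, Shampoo, Yogi) and loss functions (LossFunctions). Below is a minimal composition sketch; the constructor parameter names (in, out, tOpt), STenOptions.d, and the Fun/sequence/forward shapes are assumptions drawn from lamp's documented style, not verified signatures.

import lamp._
import lamp.autograd.const
import lamp.nn._

// Sketch only: constructor parameters and helper names are assumed, not verified.
object NnSketch extends App {
  Scope.root { implicit scope =>
    val tOpt = STenOptions.d              // assumed: double precision, CPU tensor options

    // two Linear layers with a relu in between, composed with sequence/Fun
    val model = sequence(
      Linear(in = 16, out = 32, tOpt = tOpt),
      Fun(implicit scope => _.relu),      // wraps a stateless function as a module
      Linear(in = 32, out = 1, tOpt = tOpt)
    )

    val input  = const(STen.rand(List(8, 16)))  // batch of 8 samples, 16 features
    val output = model.forward(input)           // autograd Variable of shape 8 x 1
    println(output.shape)

    // Optimizers such as AdamW are typically built through a factory, e.g.
    // AdamW.factory(learningRate = simple(1e-3), weightDecay = simple(0.0)),
    // and consumed by the training loops in the separate lamp-data module.
  }
}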