GraphAttention

lamp.nn.graph.GraphAttention$
See the GraphAttention companion class

Attributes

Companion
class GraphAttention
Supertypes
trait Product
trait Mirror
class Object
trait Matchable
class Any
Self type
GraphAttention.type

Members list

Type members

Classlikes

case object Weights extends LeafTag

Attributes

Supertypes
trait Singleton
trait Product
trait Mirror
trait Serializable
trait Product
trait Equals
trait LeafTag
trait PTag
class Object
trait Matchable
class Any
Self type
Weights.type
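
A small sketch of how such a tag can be used, assuming lamp's convention of pairing parameter tensors with PTag values so that optimizers and serializers can identify them; the parameter list below is hypothetical:

import lamp.nn.PTag
import lamp.nn.graph.GraphAttention

// Hypothetical tag/name pairs, illustrating that Weights is an ordinary
// singleton value usable as a stable identifier in pattern matches.
val params: List[(PTag, String)] =
  List((GraphAttention.Weights, "attention projections"))

params.foreach {
  case (GraphAttention.Weights, name) => println(s"graph-attention parameter: $name")
  case (_, name)                      => println(s"other parameter: $name")
}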

Inherited types

type MirroredElemLabels <: Tuple

The names of the product elements

Attributes

Inherited from:
Mirror
type MirroredLabel <: String

The name of the type

Attributes

Inherited from:
Mirror

Value members

Concrete methods

def apply[S : Sc](nodeDim: Int, edgeDim: Int, attentionKeyHiddenDimPerHead: Int, attentionNumHeads: Int, valueDimPerHead: Int, dropout: Double, tOpt: STenOptions, dotProductAttention: Boolean, nonLinearity: Boolean): GraphAttention
def multiheadGraphAttention[S : Sc](nodeFeatures: Variable, edgeFeatures: Variable, edgeI: STen, edgeJ: STen, wNodeKey1: Variable, wNodeKey2: Variable, wEdgeKey: Variable, wNodeValue: Variable, wAttention: Option[Variable], numHeads: Int): Variable

Graph Attention Network, https://arxiv.org/pdf/1710.10903.pdf. The non-linearity of eq. 4 and dropout are not applied to the final vertex activations.

Requires self edges to already be present in the graph.

Attributes

Returns

the next node representation (without relu or dropout) and a tensor with the original node and edge features aligned as [N_i, N_j, E_ij]
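
A minimal construction sketch using the documented apply factory, assuming lamp's Scope.root for implicit memory-scope management and STenOptions.d for double-precision tensor options; all dimensions are illustrative, not prescribed by the library:

import lamp._
import lamp.nn.graph.GraphAttention

Scope.root { implicit scope =>
  // Build the module with the documented apply factory.
  val layer = GraphAttention(
    nodeDim = 16,                 // input node feature dimension
    edgeDim = 8,                  // input edge feature dimension
    attentionKeyHiddenDimPerHead = 4,
    attentionNumHeads = 2,
    valueDimPerHead = 8,
    dropout = 0.1,
    tOpt = STenOptions.d,         // assumed: double-precision CPU tensor options
    dotProductAttention = false,  // use the additive attention of eq. 4 instead
    nonLinearity = true
  )
  // layer now carries the learnable parameters tagged with
  // GraphAttention.Weights and can be trained like any other lamp module.
  ()
}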

Implicits

implicit val load: Load[GraphAttention]
implicit val tr: TrainingMode[GraphAttention]
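
These instances live on the companion object, so implicit search finds them automatically wherever lamp's generic training utilities require them. A brief sketch, assuming Load and TrainingMode are the typeclasses from lamp.nn:

import lamp.nn.{Load, TrainingMode}
import lamp.nn.graph.GraphAttention

// Both instances are summonable without extra imports, because a
// companion object is part of the implicit scope of its type.
val trainingMode = implicitly[TrainingMode[GraphAttention]]
val loader = implicitly[Load[GraphAttention]]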