
What is: Graph Transformer?

Source: A Generalization of Transformer Networks to Graphs
Year: 2020
Data Source: CC BY-SA - https://paperswithcode.com

The Graph Transformer is a method proposed as a generalization of the Transformer neural network architecture to arbitrary graphs.

Compared to the original Transformer, the highlights of the presented architecture are as follows (minimal code sketches follow the list):

  • The attention mechanism is a function of neighborhood connectivity for each node in the graph.
  • The position encoding is represented by Laplacian eigenvectors, which naturally generalize the sinusoidal positional encodings often used in NLP.
  • The layer normalization is replaced by a batch normalization layer.
  • The architecture is extended to include edge representations, which can be critical for tasks with rich information on the edges, i.e., pairwise interactions (such as bond types in molecules or relationship types in knowledge graphs).
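
The Laplacian positional encodings in the second bullet can be computed directly from the graph's symmetrically normalized Laplacian. Below is a minimal sketch assuming NumPy and a dense adjacency matrix; the function name is illustrative and this is not the paper's reference code.

```python
import numpy as np

def laplacian_positional_encoding(adj: np.ndarray, k: int) -> np.ndarray:
    """Return the k lowest non-trivial Laplacian eigenvectors as per-node features.

    adj: (N, N) symmetric adjacency matrix.
    k:   number of eigenvectors to keep (k < N).
    """
    deg = adj.sum(axis=1)
    # Symmetrically normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # Eigenvectors come back sorted by ascending eigenvalue; drop the trivial first one.
    eigvals, eigvecs = np.linalg.eigh(lap)
    pe = eigvecs[:, 1:k + 1]
    # Eigenvectors are defined only up to sign, so random sign flipping is
    # typically used as augmentation during training (omitted here).
    return pe

# Usage on a 4-cycle graph:
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(laplacian_positional_encoding(A, k=2).shape)  # (4, 2)
```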
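
The remaining bullets can be illustrated by how one layer might be assembled: attention scores are computed only over graph neighbors (via an adjacency mask), modulated by projected edge features, and the usual layer normalization is swapped for batch normalization. This is a simplified, dense-adjacency sketch assuming PyTorch; class, method, and variable names are hypothetical and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class GraphTransformerLayer(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.e = nn.Linear(d_model, d_model)   # edge-feature projection
        self.o = nn.Linear(d_model, d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, 2 * d_model),
                                 nn.ReLU(),
                                 nn.Linear(2 * d_model, d_model))
        # Batch normalization replaces the usual layer normalization.
        self.bn1 = nn.BatchNorm1d(d_model)
        self.bn2 = nn.BatchNorm1d(d_model)

    def forward(self, h, e, adj):
        """h: (N, d) node features, e: (N, N, d) edge features,
        adj: (N, N) 0/1 adjacency; attention is restricted to neighbors."""
        N, d = h.shape
        q = self.q(h).view(N, self.n_heads, self.d_head)
        k = self.k(h).view(N, self.n_heads, self.d_head)
        v = self.v(h).view(N, self.n_heads, self.d_head)
        ef = self.e(e).view(N, N, self.n_heads, self.d_head)

        # Raw scores modulated by edge features, then masked to the neighborhood.
        scores = torch.einsum('ihd,jhd->ijh', q, k) / self.d_head ** 0.5
        scores = scores * ef.sum(-1)                 # simple edge modulation (illustrative)
        scores = scores.masked_fill(adj.unsqueeze(-1) == 0, float('-inf'))
        attn = torch.softmax(scores, dim=1)
        attn = torch.nan_to_num(attn)                # isolated nodes yield all -inf rows

        out = torch.einsum('ijh,jhd->ihd', attn, v).reshape(N, d)
        h = self.bn1(h + self.o(out))                # residual + batch norm
        h = self.bn2(h + self.ffn(h))
        return h

# Usage with random features on a 5-node graph (hypothetical shapes):
N, d = 5, 8
adj = (torch.rand(N, N) > 0.5).float()
layer = GraphTransformerLayer(d_model=d, n_heads=2)
out = layer(torch.randn(N, d), torch.randn(N, N, d), adj)
print(out.shape)  # torch.Size([5, 8])
```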