
What is: Laplacian Positional Encodings?

Source: Benchmarking Graph Neural Networks
Year: 2020
Data Source: CC BY-SA - https://paperswithcode.com

Laplacian eigenvectors are a natural generalization of the Transformer positional encodings (PE) to graphs, since the eigenvectors of a discrete line graph (the graph underlying an NLP sequence) are the cosine and sinusoidal functions. They encode distance-aware information: nearby nodes receive similar positional features, while distant nodes receive dissimilar ones.
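One concrete way to obtain these eigenvectors (an assumption here, following common practice rather than anything stated explicitly in this entry) is to eigendecompose the symmetric normalized graph Laplacian:

$$\Delta = I - D^{-1/2} A D^{-1/2} = U \Lambda U^{\top},$$

where $A$ is the adjacency matrix, $D$ the degree matrix, $\Lambda$ the diagonal matrix of eigenvalues, and the columns of $U$ the eigenvectors whose entries serve as positional features.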

Hence, Laplacian Positional Encoding (PE) is a general method for encoding node positions in a graph. For each node, its Laplacian PE is the vector of that node's entries in the k eigenvectors with the smallest non-trivial eigenvalues.
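Below is a minimal NumPy sketch of how such encodings might be computed. The function name, the use of a dense adjacency matrix, and the choice of the symmetric normalized Laplacian are illustrative assumptions, not details taken from the source entry.

```python
import numpy as np

def laplacian_positional_encoding(A, k):
    """Return an (n, k) matrix whose row i is the Laplacian PE of node i.

    A: dense symmetric adjacency matrix of shape (n, n).
    k: number of non-trivial eigenvectors to keep (requires k < n).
    """
    n = A.shape[0]
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros(n)
    nonzero = deg > 0
    d_inv_sqrt[nonzero] = deg[nonzero] ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # eigh returns eigenvalues (and matching eigenvectors) in ascending order.
    eigvals, eigvecs = np.linalg.eigh(L)
    # Skip the trivial eigenvector (eigenvalue ~ 0) and keep the next k.
    # Eigenvector signs are arbitrary; handling that (e.g. random sign
    # flipping during training) is omitted in this sketch.
    return eigvecs[:, 1:k + 1]

# Example: 2-dimensional positional encodings for a 6-node cycle graph
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
print(laplacian_positional_encoding(A, k=2))
```

Each row of the returned matrix can then be concatenated with (or added to) the corresponding node's input features before it is fed to a graph neural network.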