
What is: Message Passing Neural Network?

Source: Neural Message Passing for Quantum Chemistry
Year: 2017
Data Source: CC BY-SA - https://paperswithcode.com

There are at least eight notable examples of models from the literature that can be described using the Message Passing Neural Network (MPNN) framework. For simplicity, we describe MPNNs which operate on undirected graphs $G$ with node features $x_v$ and edge features $e_{vw}$. It is trivial to extend the formalism to directed multigraphs. The forward pass has two phases, a message passing phase and a readout phase. The message passing phase runs for $T$ time steps and is defined in terms of message functions $M_t$ and vertex update functions $U_t$. During the message passing phase, hidden states $h_v^t$ at each node in the graph are updated based on messages $m_v^{t+1}$ according to

$$m_v^{t+1} = \sum_{w \in N(v)} M_t\left(h_v^t, h_w^t, e_{vw}\right)$$
$$h_v^{t+1} = U_t\left(h_v^t, m_v^{t+1}\right)$$

where in the sum, $N(v)$ denotes the neighbors of $v$ in graph $G$. The readout phase computes a feature vector for the whole graph using some readout function $R$ according to

$$\hat{y} = R\left(\left\{ h_v^T \mid v \in G \right\}\right)$$

The message functions $M_t$, vertex update functions $U_t$, and readout function $R$ are all learned differentiable functions. $R$ operates on the set of node states and must be invariant to permutations of the node states in order for the MPNN to be invariant to graph isomorphism.
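The two phases above can be sketched in a few lines of NumPy. This is a toy forward pass, not the paper's learned networks: the weight matrices `W_m` and `W_u` stand in for the learned message function $M_t$ and update function $U_t$, and a plain sum over node states stands in for the readout $R$ (a sum is permutation-invariant, as required). All names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                   # hidden / feature dimension (arbitrary)
T = 3                                   # number of message passing steps

# A small undirected triangle graph with random edge features and
# random initial hidden states h_v^0.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
e = {frozenset(uv): rng.normal(size=d) for uv in [(0, 1), (1, 2), (2, 0)]}
h = {v: rng.normal(size=d) for v in neighbors}

# Fixed random weights standing in for the learned M_t and U_t.
W_m = rng.normal(size=(d, 3 * d))       # message: acts on [h_v; h_w; e_vw]
W_u = rng.normal(size=(d, 2 * d))       # update:  acts on [h_v; m_v]

for t in range(T):
    # m_v^{t+1} = sum_{w in N(v)} M_t(h_v^t, h_w^t, e_vw)
    m = {v: sum(W_m @ np.concatenate([h[v], h[w], e[frozenset((v, w))]])
                for w in neighbors[v])
         for v in neighbors}
    # h_v^{t+1} = U_t(h_v^t, m_v^{t+1})
    h = {v: np.tanh(W_u @ np.concatenate([h[v], m[v]])) for v in neighbors}

# Readout: a permutation-invariant sum over the final node states h_v^T.
y_hat = sum(h.values())
print(y_hat.shape)
```

Because the messages are summed over neighbors and the readout sums over nodes, the output is unchanged under any relabeling of the vertices, which is exactly the graph-isomorphism invariance the text requires of $R$.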