TLDR

Graph representation learning typically aggregates information from neighboring nodes, and the effective neighborhood size a node draws from depends on the graph structure, analogous to the spread of a random walk. The study analyzes key properties of neighborhood-aggregation models and proposes jumping knowledge (JK) networks, which adaptively select an appropriate neighborhood range for each node by combining representations from multiple aggregation depths into a structure-aware representation. Experiments on social, bioinformatics, and citation networks show that JK networks achieve state-of-the-art results and consistently improve existing GCN, GraphSAGE, and GAT models.

Abstract

Recent deep learning approaches for representation learning on graphs follow a neighborhood aggregation procedure. We analyze some important properties of these models and propose a strategy to overcome them. In particular, the range of neighboring nodes that a node's representation draws from strongly depends on the graph structure, analogous to the spread of a random walk. To adapt to local neighborhood properties and tasks, we explore an architecture -- jumping knowledge (JK) networks -- that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representation. In a number of experiments on social, bioinformatics and citation networks, we demonstrate that our model achieves state-of-the-art performance. Furthermore, combining the JK framework with models like Graph Convolutional Networks, GraphSAGE and Graph Attention Networks consistently improves those models' performance.
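The core idea above can be sketched in a few lines: run a neighborhood-aggregation model for several layers, keep every intermediate per-node representation, and combine them at the end (e.g., by concatenation or element-wise max). The sketch below is a minimal illustration under simplifying assumptions -- it uses plain mean-over-neighbors propagation with random weights as a stand-in for a trained GCN/GraphSAGE layer, and is not the authors' implementation; the function name `jk_representations` and its parameters are hypothetical.

```python
import numpy as np

def jk_representations(adj, x, num_layers=3, mode="concat", seed=0):
    """Sketch of jumping-knowledge aggregation: propagate features for
    several layers, keep each layer's node representations, then combine
    them per node so every node can draw on multiple neighborhood ranges."""
    rng = np.random.default_rng(seed)
    # Add self-loops and row-normalize so each layer averages a node's
    # features with its neighbors' (a stand-in for one aggregation layer).
    a_hat = adj + np.eye(adj.shape[0])
    a_hat = a_hat / a_hat.sum(axis=1, keepdims=True)

    h = x
    layer_outputs = []
    for _ in range(num_layers):
        # Untrained random weights for illustration only.
        w = rng.standard_normal((h.shape[1], h.shape[1])) * 0.1
        h = np.maximum(a_hat @ h @ w, 0.0)  # propagate one hop + ReLU
        layer_outputs.append(h)            # "jump" this layer to the output

    if mode == "concat":  # keep all neighborhood ranges side by side
        return np.concatenate(layer_outputs, axis=1)
    if mode == "max":     # per-feature max over aggregation depths
        return np.max(np.stack(layer_outputs), axis=0)
    raise ValueError(f"unknown mode: {mode}")

# Tiny 4-node path graph with 2 features per node.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.arange(8, dtype=float).reshape(4, 2)
print(jk_representations(adj, x, mode="concat").shape)  # (4, 6)
print(jk_representations(adj, x, mode="max").shape)     # (4, 2)
```

Concatenation preserves which depth each feature came from (output width grows with depth), while max-pooling keeps a fixed width and lets each node pick, feature by feature, the most informative neighborhood range -- the adaptive, per-node selection the abstract describes.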
