Entropy Neural Estimation for Graph Contrastive Learning
2023 · 11 citations · 13 references
Contrastive learning on graphs aims to extract distinguishable high-level representations of nodes. We show theoretically that the entropy of a dataset can be approximated by maximizing a lower bound on the mutual information between different views of a graph; in other words, the entropy is estimated by a neural network. Building on this finding, we propose a simple yet effective subset-sampling strategy for contrasting pairwise representations between views of a dataset. Specifically, we randomly sample nodes and edges from a given graph to build the input subset for each view. The two views are fed into a parameter-shared Siamese network that extracts high-dimensional embeddings and estimates the information entropy of the entire graph. The network is optimized with two objectives simultaneously. First, the contrastive loss takes positive and negative pairs as input; unlike previous works, our selection strategy enhances representation ability by choosing nodes according to cross-view similarities. Guided by the cross-view similarity scores, we enrich the diversity of the pairs by selecting highly similar samples as positives and clearly dissimilar samples as negatives. Second, we impose a cross-view consistency constraint on the representations generated from the different views. Experiments on seven graph benchmarks show that the proposed approach achieves competitive performance compared with current state-of-the-art methods. The source code is available at https://github.com/kunzhan/M-ILBO.
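The view construction and pair selection described in the abstract can be sketched as follows. This is a hypothetical, stdlib-only illustration, not the authors' implementation: the keep-probabilities, the mean-aggregation "encoder" standing in for the shared Siamese network, and all variable names are assumptions made for the example.

```python
# Illustrative sketch of (1) building two views by randomly sampling
# nodes and edges, and (2) selecting positive/negative pairs from
# cross-view similarity scores. All parameters here are assumed values.
import random
import math

random.seed(0)

def sample_view(edges, n, node_keep=0.8, edge_keep=0.8):
    """Build one view: randomly keep a subset of nodes, then of edges."""
    kept = {v for v in range(n) if random.random() < node_keep}
    return [(u, v) for (u, v) in edges
            if u in kept and v in kept and random.random() < edge_keep]

def encode(edges, feats):
    """Stand-in for the parameter-shared encoder: mean-aggregate
    each node's neighbors (plus itself) and L2-normalize."""
    n = len(feats)
    h = [list(f) for f in feats]            # start from self features
    deg = [1] * n
    for u, v in edges:
        for a, b in ((u, v), (v, u)):       # undirected aggregation
            h[a] = [x + y for x, y in zip(h[a], feats[b])]
            deg[a] += 1
    out = []
    for i in range(n):
        row = [x / deg[i] for x in h[i]]
        norm = math.sqrt(sum(x * x for x in row)) or 1.0
        out.append([x / norm for x in row])
    return out

# Toy undirected graph with 6 nodes and random features.
n = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5), (1, 4)]
feats = [[random.gauss(0, 1) for _ in range(4)] for _ in range(n)]

# Two sampled views pass through the same (shared) encoder.
z1 = encode(sample_view(edges, n), feats)
z2 = encode(sample_view(edges, n), feats)

# Cross-view cosine similarities guide pair selection: for each node,
# the most similar cross-view node yields a positive pair and the
# least similar one a negative pair.
sim = [[sum(a * b for a, b in zip(z1[i], z2[j])) for j in range(n)]
       for i in range(n)]
positives = [max(range(n), key=lambda j: sim[i][j]) for i in range(n)]
negatives = [min(range(n), key=lambda j: sim[i][j]) for i in range(n)]
```

In the paper's framework these selected pairs would feed the contrastive loss, while the cross-view consistency constraint would additionally pull `z1` and `z2` toward agreement on corresponding nodes.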