
Publication | Open Access

Network Embedding as Matrix Factorization

Citations: 635
References: 36
Year: 2018

TLDR

Since the invention of word2vec, the skip-gram model has significantly advanced network embedding research, giving rise to DeepWalk, LINE, PTE, and node2vec. This work unifies all of these negative-sampling-based embedding models into a single matrix-factorization framework by deriving a closed-form matrix for each. It proves that DeepWalk, LINE, PTE, and node2vec implicitly factorize matrices closely related to the graph Laplacian, and it proposes the NetMF method, which explicitly factorizes the closed-form matrix and outperforms DeepWalk and LINE on standard network mining tasks, thereby laying a theoretical foundation for skip-gram-based network embeddings.

Abstract

Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks' Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.
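To make the factorization view concrete, below is a minimal dense-matrix sketch of the NetMF idea described in the abstract: build the closed-form DeepWalk matrix M = (vol(G)/(bT)) (Σ_{r=1..T} P^r) D^{-1}, take an element-wise truncated logarithm, and factorize it with SVD. The function name, defaults, and the toy graph are illustrative assumptions; the paper's actual implementation uses sparse/approximate computation for large networks.

```python
import numpy as np

def netmf_embedding(A, dim=2, window=2, neg=1):
    """Illustrative NetMF sketch (dense matrices; not the paper's scalable code).

    A: symmetric adjacency matrix (NumPy array).
    Factorizes log(max(M, 1)) where
        M = (vol(G) / (b*T)) * (sum_{r=1..T} P^r) * D^{-1},
    with P = D^{-1} A the random-walk transition matrix,
    T the context window size, and b the number of negative samples.
    """
    vol = A.sum()                        # vol(G): total edge weight
    d = A.sum(axis=1)                    # vertex degrees
    D_inv = np.diag(1.0 / d)
    P = D_inv @ A                        # random-walk transition matrix
    S = np.zeros_like(A, dtype=float)
    P_r = np.eye(A.shape[0])
    for _ in range(window):              # accumulate sum_{r=1}^{T} P^r
        P_r = P_r @ P
        S += P_r
    M = (vol / (neg * window)) * S @ D_inv
    M_log = np.log(np.maximum(M, 1.0))   # element-wise truncated log
    U, s, _ = np.linalg.svd(M_log)       # rank-dim truncated SVD
    return U[:, :dim] * np.sqrt(s[:dim])

# Toy example: a triangle (0-1-2) with a pendant vertex 3 attached to 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
emb = netmf_embedding(A, dim=2)          # one 2-d vector per vertex
```

The resulting rows of `emb` serve as vertex representations for downstream tasks such as node classification, mirroring how DeepWalk's learned vectors are used.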
