Thomas N. Kipf, Max Welling
We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks that operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset, we demonstrate that our approach outperforms related methods by a significant margin.
We consider the problem of classifying nodes (such as documents) in a graph (such as a citation network), where labels a...
In this section, we provide theoretical motivation for a specific graph-based neural network model $f(X, A)$ that we wil...
We consider spectral convolutions on graphs defined as the multiplication of a signal $x \in \mathbb{R}^N$ (a scalar for...
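The spectral convolution above can be sketched directly: form the normalized graph Laplacian $L = I_N - D^{-1/2} A D^{-1/2}$, diagonalize it, and apply a filter $g_\theta$ to the eigenvalues in the Fourier (eigenvector) basis. The function name and signature below are illustrative, not from the paper's code:

```python
import numpy as np

def spectral_conv(A, x, theta):
    # Normalized graph Laplacian L = I_N - D^{-1/2} A D^{-1/2}
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    L = np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt
    # Graph Fourier basis: eigenvectors U of L, with L = U diag(lam) U^T
    lam, U = np.linalg.eigh(L)
    # Filter the signal in the spectral domain: U g_theta(Lam) U^T x
    return U @ (theta(lam) * (U.T @ x))
```

With the identity filter $g_\theta(\lambda) = \lambda$, this reduces to multiplying by $L$ itself; the eigendecomposition is what makes the exact spectral form expensive ($\mathcal{O}(N^2)$ multiplication, $\mathcal{O}(N^3)$ decomposition) and motivates the localized first-order approximation.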
A neural network model based on graph convolutions can therefore be built by stacking multiple convolutional layers of t...
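A single such layer can be sketched in a few lines of NumPy. This follows the paper's renormalized propagation rule $H^{(l+1)} = \sigma(\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2} H^{(l)} W^{(l)})$; the function names are illustrative:

```python
import numpy as np

def normalized_adjacency(A):
    # A_tilde = A + I_N (add self-loops); D_tilde = diag of A_tilde's row sums
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    # Symmetric renormalization: D_tilde^{-1/2} A_tilde D_tilde^{-1/2}
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, H, W, activation=np.tanh):
    # One propagation step: H^{(l+1)} = sigma(A_hat H^{(l)} W^{(l)})
    return activation(A_hat @ H @ W)
```

Stacking $K$ such layers mixes information from a node's $K$-hop neighborhood, since each layer multiplies by $\hat{A}$ once.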
Having introduced a simple, yet flexible model $f(X, A)$ for efficient information propagation on graphs, we can return ...
In the following, we consider a two-layer GCN for semi-supervised node classification on a graph with a symmetric adjace...
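The two-layer forward model $Z = \mathrm{softmax}(\hat{A}\,\mathrm{ReLU}(\hat{A} X W^{(0)}) W^{(1)})$, with $\hat{A} = \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}$ precomputed, can be sketched as follows (illustrative NumPy, not the paper's TensorFlow code):

```python
import numpy as np

def two_layer_gcn(A_hat, X, W0, W1):
    # Forward model: Z = softmax(A_hat ReLU(A_hat X W0) W1)
    H = np.maximum(A_hat @ X @ W0, 0.0)          # hidden layer with ReLU
    logits = A_hat @ H @ W1
    # Row-wise softmax over class logits (numerically stabilized)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

Each row of $Z$ is a distribution over classes for one node; the cross-entropy loss is evaluated only on the labeled rows.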
In practice, we make use of TensorFlow (Abadi et al., 2015) for an efficient GPU-based implementation of Eq. 9 using spa...
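The paper's implementation uses TensorFlow's sparse-dense matrix multiplications; the same idea can be sketched with SciPy sparse matrices, which keeps the per-layer cost at $\mathcal{O}(|\mathcal{E}|)$ instead of $\mathcal{O}(N^2)$. The function name below is illustrative:

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency_sparse(A):
    # A_hat = D_tilde^{-1/2} (A + I) D_tilde^{-1/2}, kept sparse so that
    # A_hat @ H in each layer costs O(|E|) rather than O(N^2).
    A_tilde = (A + sp.eye(A.shape[0])).tocsr()
    d = np.asarray(A_tilde.sum(axis=1)).ravel()
    D_inv_sqrt = sp.diags(d ** -0.5)
    return (D_inv_sqrt @ A_tilde @ D_inv_sqrt).tocsr()
```

Since $\hat{A}$ is fixed, it is computed once as a preprocessing step and reused at every layer and every training epoch.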
A large number of approaches for semi-supervised learning using graph representations have been proposed in recent years...
Neural networks that operate on graphs have previously been introduced in Gori et al. (2005); Scarselli et al. (2009) as...
We closely follow the experimental setup in Yang et al. (2016). Dataset statistics are summarized in Table 1. In the cit...
Unless otherwise noted, we train a two-layer GCN as described in Section 3.1 and evaluate prediction accuracy on a test ...
We compare against the same baseline methods as in Yang et al. (2016), i.e. label propagation (LP) (Zhu et al., 2003), s...
Results are summarized in Table 2. Reported numbers denote classification accuracy in percent. For ICA, we report the me...
We compare different variants of our proposed per-layer propagation model on the citation network datasets. We follow th...
Here, we report results for the mean training time per epoch (forward pass, cross-entropy calculation, backward pass) fo...
In the experiments demonstrated here, our method for semi-supervised node classification outperforms recent related meth...
Here, we describe several limitations of our current model and outline how these might be overcome in future work.
We have introduced a novel approach for semi-supervised classification on graph-structured data. Our GCN model uses an e...
A neural network model for graph-structured data should ideally be able to learn representations of nodes in a graph, ta...
From the analogy with the Weisfeiler-Lehman algorithm, we can understand that even an untrained GCN model with random we...
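This behavior is easy to reproduce on a toy graph (not the paper's karate-club example): an untrained, randomly initialized propagation step assigns identical embeddings to nodes that are structurally indistinguishable, mirroring one relabeling step of the Weisfeiler-Lehman algorithm. The setup below is an illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nodes 0 and 1 both attach only to node 2, so they are structurally equivalent.
A = np.array([[0., 0., 1.],
              [0., 0., 1.],
              [1., 1., 0.]])
A_tilde = A + np.eye(3)
d = A_tilde.sum(axis=1)
A_hat = np.diag(d ** -0.5) @ A_tilde @ np.diag(d ** -0.5)

X = np.ones((3, 4))                # constant node features
W = rng.standard_normal((4, 2))    # random, untrained weights
H = np.tanh(A_hat @ X @ W)         # one propagation step

# Structurally equivalent nodes receive identical embeddings without training.
print(np.allclose(H[0], H[1]))     # True
```

No gradient step is taken anywhere: the structure of $\hat{A}$ alone determines which nodes end up with the same representation.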
On this simple example of a GCN applied to the karate club network it is interesting to observe how embeddings react dur...
In these experiments, we investigate the influence of model depth (number of layers) on classification performance. We r...
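For the deeper models in these experiments, a residual connection between hidden layers, $H^{(l+1)} = \sigma(\hat{A} H^{(l)} W^{(l)}) + H^{(l)}$, helps mitigate the degradation seen when naively stacking many layers. A minimal sketch (illustrative function name; $W^{(l)}$ must be square so the dimensions match):

```python
import numpy as np

def gcn_layer_residual(A_hat, H, W, activation=np.tanh):
    # Residual variant: H^{(l+1)} = sigma(A_hat H^{(l)} W^{(l)}) + H^{(l)}
    return activation(A_hat @ H @ W) + H
```

With zero weights the layer reduces to the identity map, which is what lets gradients pass through deep stacks unchanged.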