Time: Thursday 25-Jun-2020 16:00 (This is a past event.)
Motivation / Abstract
Graph neural networks (GNNs) are a class of deep models that operate on data with arbitrary topology represented as graphs. The authors propose an efficient memory layer for GNNs that can jointly learn node representations and coarsen the graph representation, something that methods like GraphSAGE or GCN do not necessarily do. The authors introduce two new networks based on this layer: the memory-based GNN (MemGNN) and the graph memory network (GMN), both of which can learn hierarchical graph representations. The experimental results show that the proposed models achieve state-of-the-art results on eight out of nine graph classification and regression benchmarks. They also show that the learned representations can correspond to chemical features in the molecule data.
- Relationship between GMN/MemGNNs and transformers
- Clustering loss function
- Interpreting the centroids learned
- Global topological embedding versus local topologies
- Message passing is not actually required! Unlike DiffPool, MemGNNs do not use graph structure or message passing in every layer; the authors found instead that a global representation of the graph in the first layer is sufficient for several tasks
- Memory layers force nodes to fit into a shared feature space; the way these clusters are distributed acts as a hyperparameter the user can set to shape that feature space
- In this work, the Student-t distribution was selected as the prior because of its performance and its close connection to other clustering methods such as t-SNE
- This work focuses solely on graph labelling tasks but can be adapted for node labelling tasks
- As an add-on to existing networks, memory layers can be used to refine representations that have already undergone message passing by enforcing a clustering distribution in the feature space
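The Student-t soft assignment of nodes to memory centroids mentioned above can be sketched in a few lines. This is a minimal NumPy illustration of the general t-SNE/DEC-style assignment kernel, not the authors' exact implementation; the function names, the `alpha` degrees-of-freedom parameter, and the pooling step are illustrative assumptions.

```python
import numpy as np

def soft_assign(node_feats, centroids, alpha=1.0):
    """Soft-assign each node to each memory centroid with a
    Student-t kernel (as in t-SNE/DEC-style clustering).
    node_feats: (num_nodes, dim), centroids: (num_keys, dim).
    Returns (num_nodes, num_keys) with rows summing to 1."""
    # Squared Euclidean distances between every node and every centroid.
    d2 = ((node_feats[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def coarsen(node_feats, centroids, alpha=1.0):
    """Pool node features into per-centroid representations,
    coarsening the graph from num_nodes rows to num_keys rows."""
    q = soft_assign(node_feats, centroids, alpha)
    return q.T @ node_feats  # (num_keys, dim)
```

Stacking such layers with progressively fewer centroids yields the hierarchical (coarsened) graph representations described in the abstract, without requiring message passing at every layer.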