Graph neural networks (GNNs) are a class of deep models that operate on data with arbitrary topology represented as graphs. We propose an efficient memory layer for GNNs that can jointly learn node representations and coarsen the graph, a capability that methods such as GraphSAGE and GCN do not necessarily provide. Based on this layer, we introduce two new networks, memory-based GNN (MemGNN) and graph memory network (GMN), that can learn hierarchical graph representations. Experimental results show that the proposed models achieve state-of-the-art performance on eight out of nine graph classification and regression benchmarks. We also show that the learned representations can correspond to chemical features in the molecule data.