Google has released PEGASUS, a state-of-the-art abstractive summarization model. PEGASUS relies on a novel pre-training objective, gap-sentence generation, in which whole "important" sentences are masked out of a document and the model is trained to generate them, making pre-training deliberately close to the downstream summarization task. The authors report state-of-the-art results across 12 summarization benchmarks, along with impressive sample efficiency when fine-tuning with few examples.
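To make the objective concrete, here is a minimal toy sketch of the gap-sentence generation idea: remove "important" sentences from a document and use them as the generation target. Note that the word-overlap scoring below is my simplified stand-in for the ROUGE-based sentence selection described in the paper, and the helper name `gsg_example` is hypothetical.

```python
import re

def gsg_example(document: str, num_gaps: int = 1):
    """Toy illustration of PEGASUS-style gap-sentence generation (GSG).

    'Important' sentences are removed from the input and become the
    generation target, so pre-training resembles summarization.
    Importance is approximated here by word overlap with the rest of
    the document (a stand-in for the paper's ROUGE-based selection).
    """
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())

    def overlap(i: int) -> int:
        # Count words the i-th sentence shares with the other sentences.
        words = set(sentences[i].lower().split())
        rest = {w for j, s in enumerate(sentences) if j != i
                for w in s.lower().split()}
        return len(words & rest)

    # Pick the sentences that share the most words with the rest.
    picked = set(sorted(range(len(sentences)), key=overlap,
                        reverse=True)[:num_gaps])

    # Input: document with gap sentences replaced by a mask token
    # ([MASK1] is the token PEGASUS uses for masked sentences).
    inputs = " ".join("[MASK1]" if i in picked else s
                      for i, s in enumerate(sentences))
    # Target: the removed sentences, which the model must generate.
    target = " ".join(sentences[i] for i in sorted(picked))
    return inputs, target

doc = ("PEGASUS is a summarization model. It masks whole sentences during "
       "pre-training. The model then learns to generate those sentences. "
       "This objective mirrors abstractive summarization.")
src, tgt = gsg_example(doc)
print("input: ", src)
print("target:", tgt)
```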
Topics worth discussing:

- Effect of different LM pre-training objectives on downstream tasks.
- Sample efficiency of this model (see the sketch after this list).
- Strategies for selecting pre-training objectives.
- Evidence (or lack thereof) of symbolic reasoning happening in generated sentences.
- Effect of tokenization strategies on model performance.
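For anyone who wants to probe these questions empirically, released PEGASUS checkpoints can be loaded via the Hugging Face transformers library. The library, the `google/pegasus-xsum` checkpoint name, and the example text below are my additions, not part of the original announcement; this is just one convenient starting point, not the authors' own setup.

```python
# pip install transformers torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# "google/pegasus-xsum" is a publicly released checkpoint fine-tuned
# on the XSum summarization dataset.
model_name = "google/pegasus-xsum"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

text = ("PEGASUS pre-trains by removing important sentences from a document "
        "and generating them, which closely resembles abstractive "
        "summarization as a downstream task.")

# Tokenize the input and generate a summary.
batch = tokenizer(text, truncation=True, padding="longest",
                  return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```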