Past Recording
BERTology Meets Biology: Interpreting Attention in Protein Language Models
Wednesday Oct 21 2020 16:00 GMT
Why This Is Interesting

Transformer architectures have proven effective at learning useful representations for protein classification and generation tasks. However, these representations present challenges in interpretability. Through the lens of attention, we analyze the inner workings of the Transformer and explore how the model discerns structural and functional properties of proteins. We show that attention (1) captures the folding structure of proteins, connecting amino acids that are far apart in the underlying sequence but spatially close in the three-dimensional structure, (2) targets binding sites, a key functional component of proteins, and (3) focuses on progressively more complex biophysical properties with increasing layer depth. We also present a three-dimensional visualization of the interaction between attention and protein structure. Our findings align with known biological processes and provide a tool to aid discovery in protein engineering and synthetic biology.
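To make this kind of analysis concrete, the sketch below extracts per-head attention maps from a pretrained protein language model and lists the residue pairs a given head attends to most strongly; in the paper, such pairs are compared against the protein's contact map and annotated binding sites. Note that the model choice (Rostlab/prot_bert from Hugging Face), the example sequence, the layer/head indices, and the attention threshold are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: extract attention maps from a protein language model.
# Assumes the Hugging Face model Rostlab/prot_bert; the paper analyzes
# comparable BERT-style protein Transformers.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert", output_attentions=True)
model.eval()

# ProtBert expects amino acids separated by spaces.
sequence = "M K T A Y I A K Q R Q I S F V K S H F S R Q L E E R L G L I E V Q"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each of shape
# [batch, heads, seq_len, seq_len], where seq_len includes [CLS] and [SEP].
attentions = torch.stack(outputs.attentions).squeeze(1)  # [layers, heads, len, len]

# Inspect one head: which residue pairs receive high attention? In the
# paper's analysis these pairs are compared against the protein's 3D
# contact map and binding-site annotations.
layer, head = 3, 7            # arbitrary example indices
attn = attentions[layer, head, 1:-1, 1:-1]   # drop special-token positions
pairs = (attn > 0.3).nonzero()               # 0.3 is an arbitrary cutoff
print(f"High-attention residue pairs in layer {layer}, head {head}: {pairs.tolist()}")
```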

Takeaways

This paper builds on the synergy between NLP and computational biology by adapting and extending NLP interpretability methods to protein sequence modeling. We show how a Transformer language model recovers structural and functional properties of proteins and integrates this knowledge directly into its attention mechanism. Our analysis reveals that attention not only captures properties of individual amino acids, but also discerns more global properties such as binding sites and tertiary structure. In some cases, attention also provides a well-calibrated measure of confidence in the model’s predictions of these properties. We hope that machine learning practitioners can apply these insights in designing the next generation of protein sequence models.
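The calibration point above can be checked with a short sketch: treat a head's attention weight between two residues as the model's confidence that they are in contact, bin those weights, and compare each bin's mean attention to the empirical contact frequency. The arrays `attn` and `contact_map` are assumed inputs here (the contact map would be derived from a solved structure, e.g. a PDB file); this illustrates the idea rather than reproducing the paper's evaluation code.

```python
# Hedged sketch of a calibration check for one attention head.
# `attn` and `contact_map` are assumed precomputed arrays of shape
# [seq_len, seq_len]: attention weights and binary residue contacts.
import numpy as np

def calibration_curve(attn: np.ndarray, contact_map: np.ndarray, n_bins: int = 10):
    """For each attention-weight bin, return (mean attention, contact frequency)."""
    weights = attn.flatten()
    contacts = contact_map.flatten().astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    curve = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (weights >= lo) & (weights < hi)
        if mask.sum() > 0:
            curve.append((weights[mask].mean(), contacts[mask].mean()))
    return curve

# For a well-calibrated head, the mean attention in each bin should roughly
# match the fraction of residue pairs in that bin that are truly in contact.
```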
