Learning the Graphical Structure of Electronic Health Records with Graph Convolutional Transformer
Thursday May 21 2020 16:00 GMT
Please join the live chat.
Why This Is Interesting
Electronic health records (EHR) are inherently connected data and can be modelled as a graph. Research has shown that graph-based EHR representations outperform representations that assume no data connectivity on predictive tasks. However, EHR data does not always contain the structural information needed to build such a graph. The authors propose the Graph Convolutional Transformer (GCT), a novel approach that jointly learns the hidden structure while performing prediction tasks when structural information is unavailable. The proposed model consistently outperformed previous approaches empirically, on both synthetic data and publicly available EHR data, across prediction tasks such as graph reconstruction and readmission prediction, indicating that it can serve as an effective general-purpose representation learning algorithm for EHR data.
Discussion Points
the importance of structure in EHR data
self-attention via transformers
the use of KL divergence as a regularizer
training issues with regard to sparsity
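As background for the self-attention discussion point: the sketch below is a minimal scaled dot-product self-attention in NumPy (not the authors' code). All names and weight shapes are illustrative; the key point is that the attention matrix is a set of pairwise coefficients over the inputs.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention.
    Each row of the returned attention matrix is a distribution
    over all inputs -- the pairwise connectivity the model infers."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    attn = e / e.sum(axis=-1, keepdims=True)
    return attn @ v, attn

rng = np.random.default_rng(42)
x = rng.normal(size=(4, 6))                        # 4 features (e.g. medical codes)
w = [rng.normal(size=(6, 6)) * 0.2 for _ in range(3)]
out, attn = self_attention(x, *w)
print(attn.shape)  # (4, 4): pairwise attention coefficients
```

In the GCT setting, this (4, 4) matrix is exactly the object that gets reinterpreted as a learned soft adjacency matrix over EHR features.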
Takeaways
Structure in EHR data is important and can be modelled as a graph data structure for any downstream task
Structural information is not always provided but can be learned jointly with the prediction tasks
Transformers can be used to learn the adjacency matrix, whose entries are interpreted as attention coefficients
Instead of starting from a dense adjacency matrix, domain knowledge can be used to incorporate priors into the adjacency matrix
To stabilize learning, a KL divergence penalty can be used to prevent the attention coefficients from jumping erratically between GCT layers
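The takeaways above can be sketched together in a small NumPy example (a simplification, not the published GCT implementation): attention acts as a learned soft adjacency matrix, a domain-knowledge prior masks impossible edges, and a KL term penalizes attention that changes sharply between stacked layers. The specific prior pattern and layer shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kl_divergence(p, q, eps=1e-9):
    # Element-wise smoothed KL(p || q) between two attention matrices.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)))

def gct_layer(x, w_q, w_k, w_v, prior_logits):
    """One GCT-style layer: self-attention whose attention matrix
    doubles as a learned soft adjacency matrix. prior_logits encodes
    domain knowledge: large negative values forbid feature pairs
    that cannot be connected."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1]) + prior_logits
    attn = softmax(scores, axis=-1)   # learned adjacency
    return attn @ v, attn

rng = np.random.default_rng(0)
n, d = 5, 8                           # 5 clinical features, hidden dim 8
x = rng.normal(size=(n, d))

# Hypothetical prior: feature 0 (say, the visit node) may connect to
# everything; features 1-4 may only connect to feature 0 and themselves.
prior_logits = np.full((n, n), -1e9)
prior_logits[0, :] = 0.0
prior_logits[:, 0] = 0.0
np.fill_diagonal(prior_logits, 0.0)

attns, h = [], x
for _ in range(2):                    # two stacked GCT layers
    w_q, w_k, w_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
    h, attn = gct_layer(h, w_q, w_k, w_v, prior_logits)
    attns.append(attn)

# KL regularizer: keep layer-2 attention close to layer-1 attention,
# so the learned adjacency does not jump erratically between layers.
reg = kl_divergence(attns[0], attns[1])
print(attns[1].round(2))              # rows sum to 1; masked entries ~0
print(f"KL regularizer: {reg:.4f}")
```

Starting from the prior rather than a dense all-pairs adjacency means the model only has to refine plausible edges, which is the point of the second-to-last takeaway.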