AI-Accelerated Product Development
Understanding the Paper: Progressive Transformer-Based Generation of Radiology Reports
Total time needed:
Learn what goes into building a multi-stage transformer model by exploring three fundamental topics: Feature Extraction, Attention Mechanisms, and Visual Language Models.
Potential Use Cases
Clinical Data Mining, Automated & Reproducible Medical Diagnosis, Coherent Report Generation from Images
Who Is This For?
NLP or Computer Vision Specialists or Enthusiasts
1. Introduction: Supporting Concepts to Understand Transformer-Based Report Generation
Why does the recipe for "Understanding the Paper: Progressive Transformer-Based Generation of Radiology Reports" present its three supporting concept assets in this particular order?
2. An Overview of Image Caption Generation Methods
What are the main types of feature extraction methods for images?
How may transformers be leveraged to extract features from images?
3. When Radiology Report Generation Meets Knowledge Graph
How did Nooralahzadeh et al., the authors of our paper of interest, "Progressive Transformer-Based Generation of Radiology Reports", construct the training datasets using MIRQI?
4. Attention Is All You Need
How do Transformers differ from CNNs and RNNs?
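One way to build intuition for this question: unlike an RNN, which processes tokens one step at a time, attention relates every position to every other position in a single matrix operation. Below is a minimal NumPy sketch of the scaled dot-product attention from "Attention Is All You Need" (the function name and toy dimensions are illustrative, not from the paper's code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # all query-key similarities at once
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, key dimension d_k = 8
K = rng.normal(size=(6, 8))   # 6 key/value positions
V = rng.normal(size=(6, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): every query attends over all 6 positions in parallel
```

Nothing here is sequential, which is the core contrast with RNNs; the fixed receptive field of a CNN is likewise replaced by these global, content-dependent weights.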
5. Transformers in Computer Vision: Farewell Convolutions!
How can transformers with attention mechanism overcome limitations in convolutional models?
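A concrete piece of the answer is how vision transformers ingest images in the first place: the image is cut into fixed-size patches, each flattened into a token, and attention then operates over that token sequence instead of sliding a convolutional kernel. A minimal sketch of this patchification step (the function name and sizes are illustrative assumptions):

```python
import numpy as np

def image_to_patch_tokens(img, patch=4):
    # Split an H x W x C image into non-overlapping patch x patch tiles
    # and flatten each tile into one token vector, ViT-style.
    H, W, C = img.shape
    grid = img.reshape(H // patch, patch, W // patch, patch, C)
    tokens = grid.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)
    return tokens

img = np.arange(8 * 8 * 3, dtype=float).reshape(8, 8, 3)
tokens = image_to_patch_tokens(img, patch=4)
print(tokens.shape)  # (4, 48): a 2x2 grid of patch tokens, each of dimension 4*4*3
```

Once the image is a token sequence, every patch can attend to every other patch, giving a global receptive field from the very first layer rather than only after many stacked convolutions.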
6. An Overview of ResNet and its Variants
What is needed to construct a visual backbone?
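The key ingredient ResNet contributes to a visual backbone is the residual (skip) connection. As a toy illustration, here is the basic-block pattern with plain linear layers standing in for convolutions (an assumption made only to keep the sketch dependency-free):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    # ResNet basic block: output = ReLU(x + F(x)), where F is a small
    # learned transform. The identity shortcut lets signal (and gradients)
    # bypass F, which is what makes very deep backbones trainable.
    return relu(x + relu(x @ W1) @ W2)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 16))            # toy feature vector instead of a feature map
W1 = rng.normal(size=(16, 16)) * 0.1
W2 = rng.normal(size=(16, 16)) * 0.1
y = residual_block(x, W1, W2)
print(y.shape)  # (1, 16): same shape as x, so blocks can be stacked arbitrarily deep
```

Because each block preserves the feature shape, dozens of them can be stacked to form the backbone whose intermediate feature maps are then handed to the report-generation model.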
7. Visual Language Model Content: Generating Radiology Reports via Memory-driven Transformer
How was the original idea for visual language modeling described?
8. Revealing BART: A Denoising Objective for Pretraining
How does BART contribute to fine-tuning the language model within Nooralahzadeh et al.'s framework?
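To make BART's denoising objective concrete: during pretraining, spans of the input are corrupted (e.g. replaced by a single mask token) and the model learns to reconstruct the original text. Below is a rough sketch of such text-infilling noise; the span-length choice here is a simplification of BART's Poisson(3) span sampling, and all names are illustrative:

```python
import random

def text_infilling(tokens, mask_token="<mask>", p=0.3, seed=0):
    # BART-style noising (simplified): with probability p, replace a short
    # span of tokens with a single mask token; the model is pretrained to
    # recover the original sequence from this corrupted input.
    rng = random.Random(seed)
    out, i = [], 0
    while i < len(tokens):
        if rng.random() < p:
            span = rng.randint(1, 3)   # simplified; BART samples span lengths ~ Poisson(3)
            out.append(mask_token)
            i += span
        else:
            out.append(tokens[i])
            i += 1
    return out

sent = "the chest x ray shows no acute abnormality".split()
print(text_infilling(sent))
```

A model pretrained this way is a natural fit for later fine-tuning on report generation, since producing a fluent report from partial or intermediate text is itself a reconstruction-like task.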
9. Natural Language Generation Content: Generating Radiology Reports via Memory-driven Transformer
How was recent radiology report generation conducted using memory-driven transformers?
10. Progressive Transformer-Based Generation of Radiology Reports
How may an image-to-text-to-text approach be leveraged to generate radiology reports?