Covers: theory of GPT-3
Estimated time needed to finish: 25 minutes
Questions this item addresses:
  • What is GPT-3?
  • How is GPT-3 different from previous transformer-based architectures?
  • How does GPT-3 use few-shot and zero-shot learning to eliminate fine-tuning and the need for large task-specific datasets?
How to use this item?

Read entire Section 1
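The zero-shot vs. few-shot distinction covered in this item can be illustrated with plain prompt strings. This is a minimal sketch along the lines of the paper's English-to-French translation example; no model is called, it only shows how the prompts differ:

```python
# Zero-shot: the model receives only a task description and the query,
# with no examples in the prompt.
zero_shot_prompt = (
    "Translate English to French:\n"
    "cheese =>"
)

# Few-shot: a handful of in-context examples precede the query.
# The model's weights are never updated, so no fine-tuning and no
# large task-specific dataset are needed.
examples = [("sea otter", "loutre de mer"), ("peppermint", "menthe poivrée")]
few_shot_prompt = "Translate English to French:\n"
for english, french in examples:
    few_shot_prompt += f"{english} => {french}\n"
few_shot_prompt += "cheese =>"

print(zero_shot_prompt)
print(few_shot_prompt)
```

The only difference between the two conditions is the in-context examples; everything the model "learns" about the task lives in the prompt text itself.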

Author(s) / creator(s) / reference(s)
Tom B. Brown et al.
https://arxiv.org/pdf/2005.14165.pdf
Collaborators: Andrew Berry, Sharvari Dhote
Recipe

Understanding the paper : Persistent Anti-Muslim Bias in Large Language Models

Total time needed: ~2 hours
Objectives
This recipe will help users understand how large language models such as GPT-3 capture religious and racial bias
Potential Use Cases
Debiasing large language models
Who is this for ?
INTERMEDIATE: NLP data scientists of all levels
Click on each of the following annotated items to see details.
ARTICLE 1. Bias in Machine learning models
  • What is bias, and what are the sources of bias in ML models?
10 minutes
ARTICLE 2. Algorithmic Bias in Natural language processing models
  • What are the types of bias, and what is algorithmic bias?
30 minutes
PAPER 3. Understanding GPT-3
  • What is GPT-3?
  • How is GPT-3 different from previous transformer-based architectures?
  • How does GPT-3 use few-shot and zero-shot learning to eliminate fine-tuning and the need for large task-specific datasets?
25 minutes
ARTICLE 4. Persistent Anti-Muslim Bias in Large Language Models
  • What type of bias is found in GPT-3?
  • How can GPT-3 be debiased by introducing positive words and phrases?
15 minutes
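The debiasing idea in item 4 (steering completions by introducing positive words and phrases) can be sketched as simple prompt construction. This is a minimal illustration of the technique, not the paper's exact implementation, and the adjectives here are illustrative, not necessarily the ones the authors used:

```python
# Sketch: prepend short positive-adjective phrases to the original
# prompt so the model's completion is steered away from the violent
# associations documented in the paper. Illustrative adjectives only.
positive_adjectives = ["hard-working", "calm", "generous"]

def debias_prompt(prompt: str, group: str = "Muslims") -> str:
    """Build a debiased prompt by prefixing positive descriptions."""
    prefix = " ".join(f"{group} are {adj}." for adj in positive_adjectives)
    return f"{prefix} {prompt}"

print(debias_prompt("Two Muslims walked into a"))
```

The model itself is untouched; only the context it conditions on changes, which is why the paper's intervention needs no retraining.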

Concepts Covered
