Covers: theory of Soft-label dataset distillation
Questions this item addresses:
  • How can we train a model with dataset distillation while relaxing the hard labels to a probability distribution?
How to use this item?

Read the introduction, the experiments, and the section on extending dataset distillation.

Author(s) / creator(s) / reference(s)
Ilia Sucholutsky, Matthias Schonlau

Learning N Classes From M<N Samples

Total time needed: ~28 minutes
Objectives
Learn about developments in deep learning research for few-shot learning
Potential Use Cases
Scenarios where you have less-than-optimal sample sizes.
Who is this for?
ADVANCED: If you have already worked with neural networks and are looking to navigate suboptimal sample sizes.
Each of the following annotated items is detailed below.
PAPER 1. Dataset Distillation
  • Is it possible to train a model on synthetic data that lies outside the manifold of the original data? (a minimal sketch of the distillation loop follows this item)
  • How much data is encoded in a given training set, and how compressible is it?
18 minutes
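
The following is a minimal sketch of the bilevel idea behind dataset distillation, not the paper's code: learn a handful of synthetic examples such that a learner trained on them (here, a toy linear classifier updated with a single SGD step) does well on the real data. The stand-in dataset, model, and hyperparameters are illustrative assumptions.

```python
# A minimal sketch (not the paper's code) of the dataset-distillation loop:
# learn a few synthetic examples so that a learner trained on them performs
# well on real data. The toy linear model, single inner SGD step, stand-in
# "real" data, and all hyperparameters are illustrative assumptions.
import torch

torch.manual_seed(0)
n_classes, n_features = 3, 8

# Stand-in for the "real" training set (labels come from a random linear rule).
W_true = torch.randn(n_features, n_classes)
X_real = torch.randn(600, n_features)
y_real = (X_real @ W_true).argmax(dim=1)

# Learnable synthetic examples: one per class, with fixed hard labels.
X_syn = torch.randn(n_classes, n_features, requires_grad=True)
y_syn = torch.arange(n_classes)
lr_inner = 0.1                                 # learner's SGD step size

opt = torch.optim.Adam([X_syn], lr=0.05)
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(300):
    # Fresh randomly initialised learner (a linear classifier here).
    W = (torch.randn(n_features, n_classes) * 0.01).requires_grad_(True)
    b = torch.zeros(n_classes, requires_grad=True)

    # Inner step: one SGD update of the learner on the synthetic data.
    inner_loss = loss_fn(X_syn @ W + b, y_syn)
    gW, gb = torch.autograd.grad(inner_loss, (W, b), create_graph=True)
    W1, b1 = W - lr_inner * gW, b - lr_inner * gb

    # Outer step: evaluate the updated learner on real data; the gradient
    # flows back into the synthetic examples themselves.
    outer_loss = loss_fn(X_real @ W1 + b1, y_real)
    opt.zero_grad()
    outer_loss.backward()
    opt.step()
```

The outer loss is differentiated through the inner update, which is why the synthetic examples, rather than the learner's weights, are what the optimizer improves.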
PAPER 2. Soft-Label Dataset Distillation and Text Dataset Distillation
  • How can we train a model with dataset distillation while relaxing the hard labels to a probability distribution? (see the soft-label sketch after this item)
10 minutes
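
To make the soft-label relaxation concrete, here is a hedged sketch (again not the authors' implementation): the distilled labels become learnable logits whose softmax gives a distribution over classes, optimized jointly with the synthetic inputs. Using fewer synthetic examples than classes, and the sizes and hyperparameters below, are illustrative assumptions.

```python
# A minimal sketch of the soft-label twist: distilled labels are learnable
# logits (softmaxed into distributions) rather than fixed one-hot vectors,
# and they are optimised jointly with the synthetic inputs.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_classes, n_features, n_syn = 3, 8, 2         # fewer synthetic examples than classes

# Stand-in "real" data, as in the previous sketch.
W_true = torch.randn(n_features, n_classes)
X_real = torch.randn(600, n_features)
y_real = (X_real @ W_true).argmax(dim=1)

X_syn = torch.randn(n_syn, n_features, requires_grad=True)
L_syn = torch.zeros(n_syn, n_classes, requires_grad=True)   # learnable label logits
lr_inner = 0.1
opt = torch.optim.Adam([X_syn, L_syn], lr=0.05)

def soft_ce(logits, target_probs):
    # Cross-entropy against a soft target distribution.
    return -(target_probs * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

for step in range(300):
    # Fresh randomly initialised linear learner.
    W = (torch.randn(n_features, n_classes) * 0.01).requires_grad_(True)

    # Inner step: train the learner on synthetic inputs with *soft* labels.
    y_soft = F.softmax(L_syn, dim=1)
    inner = soft_ce(X_syn @ W, y_soft)
    (gW,) = torch.autograd.grad(inner, (W,), create_graph=True)
    W1 = W - lr_inner * gW

    # Outer step: the real-data loss gradient flows into both X_syn and L_syn.
    outer = F.cross_entropy(X_real @ W1, y_real)
    opt.zero_grad()
    outer.backward()
    opt.step()
```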
PAPER 3. ‘Less Than One’-Shot Learning: Learning N Classes From M<N Samples
  • How can we learn to distinguish N classes from a dataset with fewer than N samples? (illustrated in the sketch after this item)
10 minutes
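
The LO-shot question above can be illustrated with a toy soft-label nearest-prototype rule in the spirit of the paper's SLaPkNN classifier. The prototype positions and label distributions below are made-up values chosen so that two samples carve out three class regions; they are assumptions for illustration, not results from the paper.

```python
# A tiny sketch of 'less than one'-shot learning: with soft labels, M = 2
# prototype points can define N = 3 class regions under a distance-weighted
# soft-label nearest-prototype rule.
import numpy as np

prototypes = np.array([[-1.0], [1.0]])                 # M = 2 samples on a line
soft_labels = np.array([[0.6, 0.0, 0.4],               # leans class 0, some mass on class 2
                        [0.0, 0.6, 0.4]])              # leans class 1, some mass on class 2

def predict(x):
    # Weight each prototype's label distribution by inverse distance, sum, argmax.
    d = np.abs(prototypes[:, 0] - x) + 1e-9
    scores = (soft_labels / d[:, None]).sum(axis=0)
    return scores.argmax()

for x in (-2.0, 0.0, 2.0):
    print(x, "->", predict(x))   # expected: class 0, class 2, class 1
```

With hard labels, two samples could never produce a third class region; the shared soft-label mass is what makes M < N possible.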
