Covers: theory of the Gated Recurrent Unit (GRU)
Estimated time needed to finish: 20 minutes
Questions this item addresses:
  • Where does this model come from?
How to use this item?

Read the whole paper.

Author(s) / creator(s) / reference(s)
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, Yoshua Bengio

Gated Recurrent Unit (GRU)

Total time needed: ~2 hours
Gated recurrent units (GRUs) are a gating mechanism in recurrent neural networks, introduced in 2014 by Kyunghyun Cho et al.
Potential Use Cases
The GRU aims to mitigate the vanishing gradient problem that affects standard recurrent neural networks.
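To see why standard RNNs struggle, note that backpropagation through time multiplies the error signal by the recurrent weight matrix (and the activation's derivative) once per timestep, so over long sequences the signal can shrink geometrically. A minimal NumPy sketch of that decay (all shapes, names, and scale factors here are illustrative, not taken from any of the items below):

```python
import numpy as np

# A vanilla RNN computes h_t = tanh(W @ h_{t-1} + ...); during backpropagation
# the error signal picks up a factor of W.T times diag(1 - tanh^2) per timestep.
rng = np.random.default_rng(0)
n_h, steps = 8, 50
W = 0.1 * rng.standard_normal((n_h, n_h))  # recurrent weights, spectral norm < 1

grad = np.ones(n_h)  # error signal at the final timestep
norms = []
for _ in range(steps):
    # One backward step: recurrent weights, then the tanh derivative (<= 1)
    grad = W.T @ grad * (1.0 - np.tanh(rng.standard_normal(n_h)) ** 2)
    norms.append(np.linalg.norm(grad))

# The gradient norm shrinks geometrically as the signal flows back in time.
print(norms[0], norms[-1])
```

With the recurrent weights scaled below unit spectral norm, the gradient is essentially zero after a few dozen steps, so early timesteps receive almost no learning signal; GRU gating is designed to avoid this.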
Who is this for?
Click on each of the following annotated items to see details.
VIDEO 1. Introduction to RNN
  • What is an RNN?
10 minutes
LIBRARY 2. RNN from scratch
  • How to implement an RNN in Python from scratch?
20 minutes
ARTICLE 3. Recurrent Neural Networks (RNN) - The Vanishing Gradient Problem
  • What is the vanishing gradient problem in the RNN architecture?
20 minutes
VIDEO 4. GRU model
  • What is a GRU, and what is the math behind the model?
20 minutes
ARTICLE 5. GRU - More theory on this model
  • What is the GRU model?
20 minutes
LIBRARY 6. Implementing GRU in python
  • How to implement a GRU in Python?
10 minutes
PAPER 7. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation (Optional: the original paper)
  • Where does this model come from?
20 minutes
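Items 4–6 cover the GRU's gating math and its implementation. As a preview, a single GRU step can be sketched in NumPy as follows (parameter names are illustrative; the convention below, where the update gate z keeps the old state, follows the original paper, though some texts swap the roles of z and 1 − z):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, p):
    """One GRU timestep. x: input (n_in,), h_prev: previous state (n_h,),
    p: dict of weight matrices W_*, U_* and biases b_* (illustrative names)."""
    # Update gate z: how much of the previous state to carry over
    z = sigmoid(p["W_z"] @ x + p["U_z"] @ h_prev + p["b_z"])
    # Reset gate r: how much of the previous state feeds the candidate
    r = sigmoid(p["W_r"] @ x + p["U_r"] @ h_prev + p["b_r"])
    # Candidate state, computed from the input and the reset-gated previous state
    h_tilde = np.tanh(p["W_h"] @ x + p["U_h"] @ (r * h_prev) + p["b_h"])
    # New state: convex combination of the old state and the candidate
    return z * h_prev + (1.0 - z) * h_tilde
```

Because the new state is a convex combination of the old state and the candidate, the z * h_prev path lets gradients flow back largely unattenuated, which is how the gating addresses the vanishing-gradient issue discussed in item 3.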
