Covers: theory of the Vanishing Gradient

- What is the Vanishing Gradient Problem in RNN architecture?

Read the whole article: https://www.superdatascience.com/blogs/recurrent-neural-networks-rnn-the-vanishing-gradient-problem

Contributors: SuperDataScience Team

- Objectives: Gated recurrent units (GRUs) are a gating mechanism in recurrent neural networks, introduced in 2014 by Kyunghyun Cho et al.
- Potential Use Cases: The GRU (Gated Recurrent Unit) aims to solve the vanishing gradient problem that comes with a standard recurrent neural network.
- Who Is This For? INTERMEDIATE

Each of the following **annotated items** is described in detail below.

Resources (7)

VIDEO 1. Introduction to RNN

- What is an RNN?

10 minutes

LIBRARY 2. RNN from scratch

- How to implement an RNN in Python from scratch?

20 minutes
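As a taste of what "RNN from scratch" involves, here is a minimal sketch of a vanilla tanh RNN forward pass. This is illustrative only: the function name, weight names, and dimensions are my assumptions, not code from the linked resource.

```python
import numpy as np

def rnn_forward(inputs, W_xh, W_hh, b_h):
    """Run a simple tanh RNN over a sequence of input vectors.

    Each new hidden state mixes the current input with the
    previous hidden state through the recurrent weights W_hh.
    """
    h = np.zeros(W_hh.shape[0])  # initial hidden state
    states = []
    for x in inputs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return states

# Toy usage: 5 time steps, 3 input features, 4 hidden units (assumed sizes).
rng = np.random.default_rng(0)
xs = [rng.normal(size=3) for _ in range(5)]
W_xh = rng.normal(size=(4, 3)) * 0.1  # input-to-hidden weights
W_hh = rng.normal(size=(4, 4)) * 0.1  # hidden-to-hidden weights
b_h = np.zeros(4)
hs = rnn_forward(xs, W_xh, W_hh, b_h)
print(len(hs), hs[-1].shape)  # 5 hidden states, each of dimension 4
```

Note that the same `W_hh` is applied at every time step; that repeated multiplication is exactly what the vanishing-gradient article below is about.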

ARTICLE 3. Recurrent Neural Networks (RNN) - The Vanishing Gradient Problem

- What is the Vanishing Gradient Problem in RNN architecture?

20 minutes
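The effect the article describes can be seen in a tiny deterministic example (my sketch, not the article's code): in a one-unit tanh RNN with recurrent weight `w = 0.5`, backpropagation through time multiplies the gradient by `w * (1 - h_t**2) <= 0.5` at every step, so the gradient reaching early time steps decays geometrically.

```python
import math

w = 0.5                                # recurrent weight (assumed value)
h, states = 0.0, []
xs = [1.0] * 30                        # constant input over 30 time steps
for x in xs:
    h = math.tanh(w * h + x)           # forward pass
    states.append(h)

grad = 1.0                             # dLoss/dh arriving at the final step
for h_t in reversed(states):
    grad *= w * (1.0 - h_t ** 2)       # local Jacobian dh_t/dh_{t-1}
print(f"gradient reaching step 0: {grad:.2e}")  # vanishingly small
```

After 30 steps the surviving gradient is many orders of magnitude below 1, so the earliest inputs contribute essentially nothing to learning; this is the failure mode that gated architectures such as the GRU are designed to address.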

VIDEO 4. GRU model

- What is a GRU, and what is the math behind this model?

20 minutes

ARTICLE 5. GRU - More theory on this model

- What is the GRU model?

20 minutes

LIBRARY 6. Implementing GRU in python

- How to implement a GRU in Python?

10 minutes
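A single GRU step can be sketched from the standard update-gate/reset-gate equations (one common convention; the function name, weight names, and sizes here are my assumptions, not code from the linked resource):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step."""
    z = sigmoid(Wz @ x + Uz @ h_prev)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate state
    # The update gate interpolates between the old state and the
    # candidate; when z is near 0 the old state passes through almost
    # unchanged, which helps gradients survive over long spans.
    return (1 - z) * h_prev + z * h_tilde

# Toy usage with assumed sizes: 3 input features, 4 hidden units.
rng = np.random.default_rng(0)
d_in, d_h = 3, 4
x = rng.normal(size=d_in)
h = np.zeros(d_h)
params = [rng.normal(size=s) * 0.1
          for s in [(d_h, d_in), (d_h, d_h)] * 3]   # Wz, Uz, Wr, Ur, Wh, Uh
h_next = gru_step(x, h, *params)
print(h_next.shape)  # (4,)
```

Conventions differ between sources on whether `z` or `1 - z` multiplies the previous state; the original paper listed in resource 7 uses the opposite sign to some modern libraries, so check the convention before comparing implementations.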

PAPER 7. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation (optional: the original paper)

- Where does this model come from?

20 minutes
