Covers: theory of RMSprop
Questions this item addresses:
  • Why rprop does not work with mini-batches
  • Further developments of RMSprop
How to use this item?

In this module, we are going to learn more about the concept of RMSprop and its use cases as an optimizer (combining modules 4 and 5 will give you a good idea of the theory behind RMSprop). In the next module, we are going to implement RMSprop in Python.



Total time needed: ~88 minutes
Learn the RMSprop concept, the math behind it, and how to code it in Python
Potential Use Cases
RMSprop is a gradient-based optimization technique used to train neural networks
Who is This For?
Click on each of the following annotated items to see details.
ARTICLE 1. Intro to mathematical optimization
  • What is mathematical optimization?
  • Why do we need to optimize a cost function in ML algorithms?
10 minutes
VIDEO 2. Gradient Descent
  • What is Gradient Descent (GD)?
  • How does GD work in Python?
10 minutes
LIBRARY 3. Gradient Descent in Python
  • How to implement Gradient Descent in Python?
20 minutes
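Items 2 and 3 cover the update rule that everything else in this path builds on. A minimal sketch of gradient descent on a toy cost f(w) = (w - 3)², where the function name, learning rate, and cost are illustrative assumptions, not the library's code:

```python
# Minimal gradient descent on a toy 1-D cost f(w) = (w - 3)**2,
# whose gradient is f'(w) = 2 * (w - 3); the minimum is at w = 3.
def gradient_descent(grad, w0, lr=0.1, steps=100):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # step in the direction opposite the gradient
    return w

# Starting from w = 0, the iterates converge toward 3.
w_star = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
```

Each iteration shrinks the distance to the minimum by a constant factor (here 1 - 2·lr = 0.8), which is why a fixed small learning rate suffices for this convex toy problem.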
VIDEO 4. RMSprop
  • What is RMSprop?
  • How does this algorithm work?
8 minutes
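The core idea of item 4 fits in a few lines: keep a running average of the squared gradient and divide each step by its square root. The decay of 0.9 and epsilon of 1e-8 below are the commonly cited defaults, used here as assumptions in a single-parameter sketch:

```python
def rmsprop_step(w, grad, avg_sq, lr=0.01, decay=0.9, eps=1e-8):
    # Exponentially decayed running average of the squared gradient.
    avg_sq = decay * avg_sq + (1 - decay) * grad ** 2
    # Divide the gradient by the root of that running average;
    # eps guards against division by zero.
    w = w - lr * grad / (avg_sq ** 0.5 + eps)
    return w, avg_sq
```

Because the step is divided by the recent gradient magnitude, parameters with consistently large gradients take proportionally smaller steps, and vice versa.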
ARTICLE 5. RMSprop: Divide the gradient by a running average of its recent magnitude
  • Why rprop does not work with mini-batches
  • Further developments of RMSprop
10 minutes
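Item 5's first question has a classic numeric illustration (the nine-mini-batches-of +0.1, one-mini-batch-of -0.9 example from Hinton's lecture): rprop uses only the sign of each mini-batch gradient with a fixed step size, so gradients that should cancel on average do not. A small sketch of that argument, with an assumed step size of 0.01:

```python
# Ten mini-batch gradients whose average is exactly zero, so a
# full-batch method would leave the weight (roughly) unchanged.
grads = [0.1] * 9 + [-0.9]

step = 0.01  # rprop-style fixed step magnitude
w_rprop = 0.0
for g in grads:
    # rprop ignores the gradient's magnitude and moves by a fixed
    # step in the direction opposite its sign.
    w_rprop -= step * (1 if g > 0 else -1)

# Net drift after one epoch: nine steps down, one step up -> -0.08,
# even though the average gradient over the epoch is zero.
```

RMSprop fixes this by keeping the magnitude information: dividing by a running average of the squared gradient normalizes the steps without discarding how large each gradient was.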
LIBRARY 6. RMSprop from scratch in Python
  • How to code RMSprop in Python from scratch?
10 minutes
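A from-scratch sketch of the kind of loop item 6 builds, combining the per-step rule with an optimization loop. The toy cost f(w) = w² and all hyperparameters here are illustrative assumptions:

```python
def rmsprop(grad, w0, lr=0.01, decay=0.9, eps=1e-8, steps=2000):
    w, avg_sq = w0, 0.0
    for _ in range(steps):
        g = grad(w)
        # Running average of the squared gradient.
        avg_sq = decay * avg_sq + (1 - decay) * g ** 2
        # Gradient step scaled by the root of that average.
        w -= lr * g / (avg_sq ** 0.5 + eps)
    return w

# Minimize f(w) = w**2, whose gradient is 2w; the minimum is at w = 0.
w_star = rmsprop(lambda w: 2 * w, w0=5.0)
```

Note that once the running average tracks the gradient's magnitude, each step has size close to the learning rate regardless of how steep the cost is, so the iterates settle in a small neighborhood of the minimum rather than converging exactly.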
LIBRARY 7. Implement RMSprop in Keras
  • How to implement RMSprop in Keras?
20 minutes
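Keras ships RMSprop as a built-in optimizer, so item 7 mostly comes down to passing it to `compile`. A minimal usage sketch, assuming TensorFlow's bundled Keras; the model shape is a placeholder:

```python
import tensorflow as tf

# RMSprop with commonly used defaults; rho is the decay rate of the
# running average of squared gradients.
optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)

# A placeholder one-layer model, just to show where the optimizer plugs in.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=optimizer, loss="mse")
```

From here, `model.fit(x, y, ...)` trains the weights with RMSprop updates instead of plain gradient descent.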

Concepts Covered
