Covers: implementation of Batch normalization
Estimated time needed to finish: 20 minutes
Questions this item addresses:
  • How to implement batch normalization in Keras?
How to use this item?

In the last module of the Batch normalization shortlist, we learn how to implement BN in Keras by calling tf.keras.layers.BatchNormalization. Note that BN is added to a model as a layer (not an optimizer); this is how it is typically used in real projects.
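As a quick illustration, here is a minimal sketch of how the layer is typically placed between a linear layer and its activation. The input size (20 features), layer widths, and 10-class softmax output are hypothetical, chosen only to make the example self-contained:

    import tensorflow as tf

    # A small dense network with BatchNormalization inserted between
    # the linear transform and the activation (hypothetical sizes).
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, input_shape=(20,)),     # linear transform
        tf.keras.layers.BatchNormalization(),             # normalize pre-activations per mini-batch
        tf.keras.layers.Activation("relu"),               # activation applied after normalization
        tf.keras.layers.Dense(10, activation="softmax"),  # hypothetical 10-class output
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")

Placing BatchNormalization before the activation follows the original paper; applying it after the activation is also common in practice.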

Programming Languages: Python

Batch normalization

Total time needed: ~3 hours
Objectives
Learn about the batch normalization concept and the math behind it
Potential Use Cases
Batch normalization is a technique for training very deep neural networks that standardizes the inputs to a layer for each mini-batch
Who is This For?
INTERMEDIATE
The shortlist consists of the following annotated items.
VIDEO 1. Normalizing Inputs
  • How does normalization work?
  • Why do we need to normalize our inputs in a neural network? (a short sketch follows below)
10 minutes
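To make the idea concrete, here is a minimal NumPy sketch of input normalization, assuming a hypothetical feature matrix X with one row per sample:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(loc=3.0, scale=5.0, size=(1000, 20))  # hypothetical raw features

    mu = X.mean(axis=0)            # per-feature mean over the training set
    sigma = X.std(axis=0)          # per-feature standard deviation
    X_norm = (X - mu) / sigma      # zero mean, unit variance per feature

The same mu and sigma computed on the training set should be reused to normalize validation and test inputs.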
ARTICLE 2. Intro to mini-batch gradient descent (with pseudo code)
  • What is Mini-Batch Gradient Descent?
  • How to Configure Mini-Batch Gradient Descent?
20 minutes
LIBRARY 3. Mini-batch GD from scratch in Python
  • How to implement mini-batch GD in Python? (see the sketch below)
10 minutes
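For orientation before the linked implementation, here is a minimal sketch of mini-batch GD for plain linear regression with an MSE loss; the model and hyperparameters are illustrative assumptions, not the linked library's code:

    import numpy as np

    def minibatch_gd(X, y, lr=0.01, batch_size=32, epochs=10):
        # Mini-batch gradient descent for linear regression (MSE loss).
        n, d = X.shape
        w = np.zeros(d)
        rng = np.random.default_rng(0)
        for _ in range(epochs):
            idx = rng.permutation(n)                   # reshuffle samples every epoch
            for start in range(0, n, batch_size):
                batch = idx[start:start + batch_size]  # indices for this mini-batch
                Xb, yb = X[batch], y[batch]
                grad = 2.0 * Xb.T @ (Xb @ w - yb) / len(batch)  # MSE gradient on the batch
                w -= lr * grad                         # gradient step
        return w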
ARTICLE 4. Forward propagation in neural networks
  • What is Forward propagation?
  • What is the math behind this concept?
20 minutes
LIBRARY 5. Forward propagation from scratch in Python
  • How to implement forward propagation in Python? (see the sketch below)
20 minutes
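As a reference point, here is a minimal sketch of a forward pass through a two-layer network; the shapes and choice of activations are illustrative assumptions, not the linked library's code:

    import numpy as np

    def forward(X, W1, b1, W2, b2):
        # One forward pass: linear -> ReLU -> linear -> sigmoid.
        z1 = X @ W1 + b1                   # linear step, hidden layer
        a1 = np.maximum(0.0, z1)           # ReLU activation
        z2 = a1 @ W2 + b2                  # linear step, output layer
        return 1.0 / (1.0 + np.exp(-z2))   # sigmoid output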
VIDEO 6. Why Does Batch Norm Work? [no math!]
  • What is Batch normalization?
  • Why Does Batch Norm Work?
15 minutes
VIDEO 7. Fitting Batch Norm Into Neural Networks [more advanced math here!]
  • How to fit batch norm into a neural network? (the core equations follow below)
13 minutes
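For reference, these are the batch norm equations from the original paper (item 10). For a mini-batch B = {x_1, ..., x_m} of activations of one feature:

    \mu_B = \frac{1}{m} \sum_{i=1}^{m} x_i
    \sigma_B^2 = \frac{1}{m} \sum_{i=1}^{m} (x_i - \mu_B)^2
    \hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}
    y_i = \gamma \hat{x}_i + \beta

where gamma and beta are learned per-feature parameters and epsilon is a small constant for numerical stability.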
LIBRARY 8. How to implement Batch Normalization (BN) using Python from scratch
  • How to implement batch normalization in neural networks using Python? (a from-scratch sketch follows below)
20 minutes
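Before diving into the linked implementation, here is a minimal sketch of the training-time BN forward pass; batch_norm_forward is a hypothetical helper, not the linked library's code:

    import numpy as np

    def batch_norm_forward(x, gamma, beta, eps=1e-5):
        # Training-time batch norm for x of shape (batch, features).
        mu = x.mean(axis=0)                      # per-feature batch mean
        var = x.var(axis=0)                      # per-feature batch variance
        x_hat = (x - mu) / np.sqrt(var + eps)    # normalize to zero mean, unit variance
        return gamma * x_hat + beta              # learned scale and shift

At inference time, running averages of the mean and variance collected during training are used instead of batch statistics.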
LIBRARY 9. Batch normalization in Keras
  • How to implement batch normalization in Keras?
20 minutes
PAPER 10. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (OPTIONAL)
  • Where does this method come from?
30 minutes
