Past Recording
Layerwise Learning for Quantum Neural Networks
Tuesday Sep 22 2020 14:00 GMT
Why This Is Interesting

Recent advances in quantum computing hardware have made it possible to run the first algorithms on experimental quantum devices. These quantum computers, referred to as noisy intermediate-scale quantum (NISQ) devices, still have a small number of qubits and no error correction. One class of algorithms believed to cope well with the limitations of NISQ devices is quantum neural networks (QNNs). In this talk, we are going to explore what QNNs are, see why they suffer from vanishing gradients like their classical counterparts, and introduce layerwise learning, a new method that dampens the effect of vanishing gradients in QNNs.
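
To make the setting concrete, here is a minimal sketch of a QNN as a layered, parameterized quantum circuit, written with PennyLane purely for illustration; the qubit count, layer count, gate choices, and single-qubit readout are assumptions, not the architecture discussed in the talk.

    import pennylane as qml
    from pennylane import numpy as np

    n_qubits, n_layers = 4, 3
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def qnn(inputs, weights):
        # Encode classical data as single-qubit rotations.
        for i in range(n_qubits):
            qml.RY(inputs[i], wires=i)
        # Each "layer" is one round of trainable rotations plus entangling gates.
        for layer in range(n_layers):
            for i in range(n_qubits):
                qml.RY(weights[layer, i], wires=i)
            for i in range(n_qubits - 1):
                qml.CNOT(wires=[i, i + 1])
        # Read out a single expectation value as the network's output.
        return qml.expval(qml.PauliZ(0))

    inputs = np.array(np.random.uniform(0, np.pi, n_qubits), requires_grad=False)
    weights = np.array(np.random.uniform(0, 2 * np.pi, (n_layers, n_qubits)), requires_grad=True)
    print(qnn(inputs, weights))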

Discussion Points
  • How are layers in a quantum neural network different from those in a classical one?
  • What is the relationship between quantum/hardware noise and this issue?
  • How does the vanishing gradient issue differ in a quantum neural network vs. a classical one?
Takeaways
  • quantum neural networks run into the same issue of vanishing gradients during training as their classical counterparts, but for a different reason
  • previous work has shown that the choice of cost function (global is worse than local) and the number of qubits and layers in the quantum neural network (more is worse) determine when/how training runs into the plateau
  • Andrea and co-authors have shown that freezing most of the circuit and training one part at a time (layerwise) can result in better training time and accuracy; they demonstrate this on the MNIST data set in a “quantum simulator” (a rough sketch of the layerwise idea follows below)
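
A rough sketch of the layerwise idea, again using PennyLane for illustration only: grow the circuit one layer at a time and optimize just the newest layer's parameters while earlier layers stay frozen. The toy cost, optimizer, and step counts here are assumptions, not the exact schedule from the paper.

    import pennylane as qml
    from pennylane import numpy as np

    n_qubits, n_layers, steps_per_layer = 4, 3, 50
    dev = qml.device("default.qubit", wires=n_qubits)

    def layer(params):
        for i in range(n_qubits):
            qml.RY(params[i], wires=i)
        for i in range(n_qubits - 1):
            qml.CNOT(wires=[i, i + 1])

    @qml.qnode(dev)
    def circuit(frozen, trainable):
        for p in frozen:                      # previously trained layers, held fixed
            layer(p)
        layer(trainable)                      # only this layer is being trained
        return qml.expval(qml.PauliZ(0))      # local cost: a single-qubit observable

    def cost(trainable, frozen):
        return circuit(frozen, trainable)

    opt = qml.GradientDescentOptimizer(stepsize=0.2)
    frozen = []
    for l in range(n_layers):
        # Start the new layer near the identity and train it on its own.
        params = np.array(np.random.uniform(0, 0.1, n_qubits), requires_grad=True)
        for _ in range(steps_per_layer):
            params = opt.step(lambda p: cost(p, frozen), params)
        # Freeze the newly trained layer before growing the circuit further.
        frozen.append(np.array(params, requires_grad=False))

Training each shallow sub-circuit on its own keeps the number of simultaneously optimized parameters small, which is the intuition for why this helps against barren plateaus.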
Time of Recording: Tuesday Sep 22 2020 14:00 GMT