Compact Neural Representation Using Attentive Network Pruning

Time: Wednesday 8-Jul-2020 16:00


Motivation / Abstract
Pruning is becoming increasingly popular as deep learning migrates to smaller mobile platforms. The paper presents a pruning technique that not only outperforms baseline models but also shows that a high compression ratio is achievable with negligible loss of accuracy. It achieves these results by introducing a novel hierarchical selection mechanism as the basis for pruning.
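To make the idea of pruning and compression ratio concrete, here is a minimal sketch of generic magnitude-based weight pruning. This is an illustrative baseline only, not the paper's attentive or hierarchical selection method; the function name and sparsity level are assumptions for the example.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Generic baseline for illustration; the paper's method instead uses
    an attentive, hierarchical (top-down) selection to decide what to keep.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)  # number of weights to drop
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
pruned, mask = magnitude_prune(W, sparsity=0.9)
# Compression ratio = total weights / weights kept
compression = W.size / max(int(mask.sum()), 1)
print(f"kept {int(mask.sum())} / {W.size} weights, ~{compression:.1f}x compression")
```

At 90% sparsity only about a tenth of the weights survive, giving roughly a 10x compression ratio; the question the paper addresses is how to choose which weights to keep so that accuracy loss stays negligible.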
Questions Discussed
- Neural Network Compression
- Top-down selection
- Pruning mechanisms
Stream Categories:
- Math and Foundations