1. Pruning
- Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1389–1397, 2017.
- Tien-Ju Yang, Andrew Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, and Hartwig Adam. NetAdapt: Platform-aware neural network adaptation for mobile applications. In Proceedings of the European Conference on Computer Vision (ECCV), pages 285–300, 2018.
- Namhoon Lee, Thalaiyasingam Ajanthan, and Philip H. S. Torr. SNIP: Single-shot network pruning based on connection sensitivity. In International Conference on Learning Representations (ICLR), 2019.
- Xiaohan Tu, Cheng Xu, Siping Liu, Renfa Li, Guoqi Xie, Jing Huang, and Laurence Tianruo Yang. Efficient monocular depth estimation for edge devices in internet of things. IEEE Transactions on Industrial Informatics, 2020.
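
To make the channel-pruning theme of this list concrete, here is a minimal PyTorch sketch of L1-norm channel selection in the spirit of the papers above. The 50% keep ratio, the toy layer sizes, and the helper names (`channel_l1_scores`, `prune_output_channels`) are illustrative assumptions, not details from any cited paper.

```python
import torch
import torch.nn as nn

def channel_l1_scores(conv: nn.Conv2d) -> torch.Tensor:
    # Score each output channel by the L1 norm of its filter weights.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def prune_output_channels(conv: nn.Conv2d, keep_ratio: float):
    # Build a smaller Conv2d keeping only the highest-scoring output
    # channels (groups/dilation are ignored in this sketch).
    scores = channel_l1_scores(conv)
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    keep = torch.topk(scores, n_keep).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep])
    return pruned, keep

# Usage: drop half the channels of a toy layer and check the output shape.
conv = nn.Conv2d(3, 16, 3, padding=1)
pruned, kept = prune_output_channels(conv, keep_ratio=0.5)
x = torch.randn(1, 3, 32, 32)
print(pruned(x).shape)  # torch.Size([1, 8, 32, 32])
```

Note that any layer consuming this output would also need its input channels sliced by `kept`; that cross-layer bookkeeping is a large part of what the channel-pruning and NetAdapt papers actually address.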
2. Knowledge Distillation
- Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
- Antonio Polino, Razvan Pascanu, and Dan Alistarh. Model compression via distillation and quantization. arXiv preprint arXiv:1802.05668, 2018.
- Andrea Pilzer, Stéphane Lathuilière, Nicu Sebe, and Elisa Ricci. Refine and distill: Exploiting cycle-inconsistency and knowledge distillation for unsupervised monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9768–9777, 2019.
- Yifan Liu, Changyong Shu, Jingdong Wang, and Chunhua Shen. Structured knowledge distillation for dense prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
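
The distillation objective from Hinton et al. is compact enough to sketch directly. The temperature `T = 4.0` and the 0.7/0.3 loss weighting below are illustrative defaults, not values taken from the papers above.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      T: float = 4.0, alpha: float = 0.7):
    # KL divergence between temperature-softened teacher and student
    # distributions; scaling by T**2 keeps gradient magnitudes comparable
    # across temperatures.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

# Usage with random logits for a 10-class toy problem.
s = torch.randn(8, 10)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y).item())
```

The structured-distillation and cycle-consistency papers above build on this per-logit loss with additional pairwise and holistic terms suited to dense prediction.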
3. Monocular Depth Estimation
4. Tensor Decomposition
- Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempitsky. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. In International Conference on Learning Representations (ICLR), 2015.
- Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. In International Conference on Learning Representations (ICLR), 2016.
- Rank selection of CP-decomposed convolutional layers with variational Bayesian matrix factorization.
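
As a worked example of the CP approach, below is a minimal sketch of replacing one K×K convolution with the four-layer factorized sequence described by Lebedev et al., using TensorLy's `parafac`. The rank of 8 and the toy layer sizes are arbitrary assumptions (rank selection is exactly what the third paper addresses), and no fine-tuning step is included.

```python
import torch
import torch.nn as nn
import tensorly as tl
from tensorly.decomposition import parafac

tl.set_backend("pytorch")

def cp_decompose_conv(conv: nn.Conv2d, rank: int) -> nn.Sequential:
    # CP-decompose the 4-way weight tensor (C_out, C_in, kH, kW)
    # into rank-R factors, one per mode.
    weights, (out_f, in_f, h_f, w_f) = parafac(conv.weight.detach(), rank=rank)
    kH, kW = conv.kernel_size
    # Pointwise conv mixing input channels down to R components.
    pointwise_in = nn.Conv2d(conv.in_channels, rank, 1, bias=False)
    # Depthwise convs applying the vertical and horizontal factors separately.
    depthwise_h = nn.Conv2d(rank, rank, (kH, 1), stride=(conv.stride[0], 1),
                            padding=(conv.padding[0], 0), groups=rank, bias=False)
    depthwise_w = nn.Conv2d(rank, rank, (1, kW), stride=(1, conv.stride[1]),
                            padding=(0, conv.padding[1]), groups=rank, bias=False)
    # Pointwise conv mixing the R components up to the output channels.
    pointwise_out = nn.Conv2d(rank, conv.out_channels, 1,
                              bias=conv.bias is not None)
    with torch.no_grad():
        pointwise_in.weight.copy_(in_f.t().unsqueeze(-1).unsqueeze(-1))
        depthwise_h.weight.copy_(h_f.t().unsqueeze(1).unsqueeze(-1))
        depthwise_w.weight.copy_(w_f.t().unsqueeze(1).unsqueeze(2))
        pointwise_out.weight.copy_((out_f * weights).unsqueeze(-1).unsqueeze(-1))
        if conv.bias is not None:
            pointwise_out.bias.copy_(conv.bias)
    return nn.Sequential(pointwise_in, depthwise_h, depthwise_w, pointwise_out)

# Usage: factorize a toy layer and compare output shapes.
conv = nn.Conv2d(16, 32, 3, padding=1)
approx = cp_decompose_conv(conv, rank=8)
x = torch.randn(1, 16, 28, 28)
print(conv(x).shape, approx(x).shape)  # both torch.Size([1, 32, 28, 28])
```

The fine-tuning that gives the Lebedev et al. paper its name would follow this replacement, retraining the factorized network to recover the accuracy lost to the low-rank approximation.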