July 21, 2017, 2:00 pm - 5:00 pm (afternoon session)
Hawaii Convention Center, Honolulu, Hawaii (in conjunction with IEEE CVPR 2017)
While many sophisticated models have been developed for visual information processing, few consider their usability in the presence of (heavy) data quality degradation. Most successful models are trained and evaluated on high-quality visual datasets. In practical scenarios, however, the data source often cannot be guaranteed to be of high quality. For example, video surveillance systems must rely on cameras of very limited resolution, because installing high-definition cameras everywhere is prohibitively expensive; this creates a practical need to recognize objects reliably from very low-resolution images. Other quality-degrading factors, such as occlusion, motion blur, missing data and label ambiguity, are also ubiquitous in the wild.
The tutorial will present a comprehensive, in-depth review of recent advances in the robust sensing, processing and understanding of low-quality visual data. First, we will introduce how image/video restoration models (e.g., denoising, deblurring, super-resolution) can be enhanced by incorporating various problem structures and priors. Next, we will show how image/video restoration and visual recognition can be jointly optimized. Such end-to-end optimization consistently achieves superior performance over traditional multi-stage pipelines. We will also demonstrate how the approaches discussed above benefit real-world applications. Furthermore, we will address an increasingly important issue in using big visual data for machine learning: the available dataset may not contain high-quality labels, so weak, noisy or otherwise low-quality labels must be exploited intelligently to achieve the desired outcome.
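To make the joint-optimization idea concrete, here is a minimal toy sketch (not any specific method from the tutorial): a linear "restoration" map and a logistic classifier are trained together on noisy inputs, with a loss that combines cross-entropy and a reconstruction term, so gradients from recognition also shape the restoration stage. All data, variable names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two 2-D Gaussian classes observed through additive noise.
n = 200
x_clean = np.vstack([rng.normal(-2.0, 1.0, (n, 2)),
                     rng.normal(2.0, 1.0, (n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])
x_noisy = x_clean + rng.normal(0.0, 1.5, x_clean.shape)

# Parameters: a linear "restoration" map R and a logistic classifier (w, b).
R = np.eye(2)
w = np.zeros(2)
b = 0.0
lr, lam = 0.05, 0.1   # learning rate; weight of the reconstruction term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    x_rest = x_noisy @ R.T                 # "restored" features
    p = sigmoid(x_rest @ w + b)            # class-1 probabilities
    g = (p - y) / len(y)                   # grad of cross-entropy w.r.t. logits
    # Joint loss = cross-entropy + lam * mean reconstruction error,
    # so R receives gradients from BOTH terms (the end-to-end coupling).
    grad_w = x_rest.T @ g
    grad_b = g.sum()
    grad_R = (np.outer(w, g @ x_noisy)
              + 2.0 * lam / len(y) * (x_rest - x_clean).T @ x_noisy)
    w -= lr * grad_w
    b -= lr * grad_b
    R -= lr * grad_R

acc = ((p > 0.5) == y).mean()
print(f"joint training accuracy on noisy inputs: {acc:.2f}")
```

The same pattern scales up when the linear map is replaced by a deep restoration network and the classifier by a recognition network, which is the setting the tutorial covers.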
As low data quality is a bottleneck for numerous applications, including visual recognition, object tracking, medical image processing and 3D vision, the tutorial is expected to be of broad interest to the CVPR community.
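The noisy-label issue raised above can also be sketched with a toy example. The snippet below contrasts fitting flipped labels directly against a generic "bootstrapping" heuristic that blends the given labels with the model's own predictions; it is a simple stand-in for the more sophisticated distillation-style methods in the reading list, and all names and settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two separable classes; 20% of the training labels are flipped.
n = 300
x = np.vstack([rng.normal(-2.0, 1.0, (n, 2)),
               rng.normal(2.0, 1.0, (n, 2))])
y_true = np.concatenate([np.zeros(n), np.ones(n)])
flip = rng.random(2 * n) < 0.2
y_noisy = np.where(flip, 1.0 - y_true, y_true)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(make_targets, steps=300, lr=0.1):
    """Logistic regression trained against per-step targets."""
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        p = sigmoid(x @ w + b)
        t = make_targets(p)            # training targets for this step
        g = (p - t) / len(t)
        w -= lr * (x.T @ g)
        b -= lr * g.sum()
    return w, b

beta = 0.8  # how much to trust the given (noisy) labels
# Naive: fit the noisy labels directly.
w1, b1 = train(lambda p: y_noisy)
# Bootstrapped: blend noisy labels with the model's own hard predictions.
w2, b2 = train(lambda p: beta * y_noisy + (1.0 - beta) * (p > 0.5))

acc_naive = ((sigmoid(x @ w1 + b1) > 0.5) == y_true).mean()
acc_boot = ((sigmoid(x @ w2 + b2) > 0.5) == y_true).mean()
print(f"naive: {acc_naive:.2f}  bootstrapped: {acc_boot:.2f}")
```

On this easy 2-D problem both variants recover a good boundary; the bootstrapping term matters most when the model is expressive enough to memorize the flipped labels, which is the regime the tutorial's noisy-label discussion targets.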
Slides, project websites and code will be posted shortly after the tutorial.
1. H. Zhang, J. Yang, Y. Zhang, N. M. Nasrabadi, T. S. Huang, "Close the Loop: Joint Blind Image Restoration and Recognition with Sparse Representation Prior", in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2011.
2. H. Zhang, D. Wipf, "Non-Uniform Camera Shake Removal Using a Spatially-Adaptive Sparse Penalty", Advances in Neural Information Processing Systems (NIPS), 2013.
3. H. Zhang, D. Wipf, Y. Zhang, "Multi-Observation Blind Deconvolution with an Adaptive Sparse Prior", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 36(8): 1628-1643, 2014.
4. H. Zhang, L. Carin, "Multi-Shot Imaging: Joint Alignment, Deblurring, and Resolution Enhancement", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
5. D. Wipf, H. Zhang, "Revisiting Bayesian Blind Deconvolution", Journal of Machine Learning Research (JMLR), 15(1): 3595-3634, 2014.
1. B. Li, X. Peng, Z. Wang, J. Xu, D. Feng, "AOD-Net: All-in-One Dehazing Network", in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
2. D. Liu, Z. Wang, Y. Fan, X. Liu, Z. Wang, S. Chang, T. Huang, "Robust Video Super-Resolution with Learned Temporal Dynamics", in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
3. D. Liu, B. Wen, X. Liu, T. Huang, "When Image Denoising Meets High-Level Vision Tasks: A Deep Learning Approach", arXiv, 2017.
4. B. Cheng, Z. Wang, Z. Zhang, Z. Li, D. Liu, J. Yang, S. Huang, T. Huang, "Robust Emotion Recognition from Low Quality and Low Bit Rate Video: A Deep Learning Approach", in Proceedings of the 7th International Conference on Affective Computing and Intelligent Interaction (ACII), 2017.
5. Z. Wang, S. Chang, Y. Yang, D. Liu, T. Huang, "Studying Very Low Resolution Recognition Using Deep Networks", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
1. Yuncheng Li, Jianchao Yang, Yale Song, Liangliang Cao, Jia Li, Jiebo Luo, "Learning from Noisy Labels with Distillation", in Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, October 2017.
2. Quanzeng You, Hailin Jin, Jianchao Yang, Jiebo Luo, "Building a Large-Scale Dataset for Image Emotion Recognition: The Fine Print and the Benchmark", the 30th AAAI Conference on Artificial Intelligence (AAAI), Phoenix, AZ, January 2016.
3. Quanzeng You, Jiebo Luo, Hailin Jin, Jianchao Yang, "Robust Image Sentiment Analysis Using Progressively Trained and Domain Transferred Deep Networks", the 29th AAAI Conference on Artificial Intelligence (AAAI), Austin, TX, January 2015.
4. Lixin Duan, Dong Xu, Ivor Tsang, Jiebo Luo, "Visual Event Recognition in Videos by Learning from Web Data", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, June 2010. (Best Student Paper)
5. Jingen Liu, Jiebo Luo, Mubarak Shah, "Recognizing Realistic Actions from Videos in the Wild", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, June 2009.