Models built with deep neural networks (DNNs) can handle complicated real-world data extremely well, without suffering from the curse of dimensionality or from non-convex optimization. To contribute to the theoretical understanding of deep learning, we will investigate the nonparametric aspects of DNNs by addressing the following questions: (i) what kind of data can be best learned by deep neural networks? (ii) can deep neural networks achieve statistical optimality? (iii) is there any algorithmic guarantee for obtaining such optimal neural networks? Our theoretical analysis applies to the two most fundamental setups in practice: regression and classification.
About the Speaker
Guang Cheng is currently Presidential Chair Professor at The Chinese University of Hong Kong, Shenzhen, while on leave from Purdue University. His research interests include deep learning theory, statistical machine learning, and high-dimensional statistics. His academic achievements have been recognized by an NSF CAREER Award, Fellowship of the Institute of Mathematical Statistics, a Simons Fellowship in Mathematics, and the Noether Young Scholar Award. Prof. Cheng obtained his bachelor's degree in Economics from Tsinghua University and earned his PhD in Statistics from the University of Wisconsin-Madison in 2006. He has published extensively in prestigious conferences and journals such as NeurIPS, the Annals of Statistics, and IEEE Transactions on Information Theory. He also serves as an associate editor of renowned journals such as the Journal of the American Statistical Association, the Canadian Journal of Statistics, and the Journal of Blockchain Research.