Abstract
We introduce a new method for sparse principal component analysis, based on the aggregation of eigenvector information from carefully selected axis-aligned random projections of the sample covariance matrix. Unlike most alternative approaches, our algorithm is non-iterative, so it is not vulnerable to a bad choice of initialisation. We provide theoretical guarantees under which our principal subspace estimator can attain the minimax optimal rate of convergence in polynomial time. In addition, our theory provides a more refined understanding of the statistical and computational trade-off in the problem of sparse principal component estimation, revealing a subtle interplay between the effective sample size and the number of random projections that are required to achieve the minimax optimal rate. Numerical studies provide further insight into the procedure and confirm its highly competitive finite-sample performance.
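To make the idea in the abstract concrete, the following is a minimal illustrative sketch of a single-component version: draw axis-aligned random projections (random coordinate subsets) of the sample covariance matrix, keep the projections whose leading eigenvalue is largest, aggregate the corresponding eigenvector information into coordinate-wise scores, and refit on the selected support. The selection rule, aggregation weights, and parameter names (`d`, `A`, `B`, `k`) here are assumptions for illustration, not the exact algorithm analysed in the talk.

```python
# Hypothetical sketch of sparse PCA via axis-aligned random projections.
# Not the speaker's exact procedure; a simplified single-component illustration.
import numpy as np

def sparse_pc_via_random_projections(X, d=10, A=100, B=50, k=10, seed=0):
    """Estimate a k-sparse leading principal component of the (centred) data matrix X."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    S = X.T @ X / n                        # sample covariance matrix (columns assumed centred)

    scores = np.zeros(p)
    for _ in range(B):                     # B groups of projections
        best_val, best_vec, best_idx = -np.inf, None, None
        for _ in range(A):                 # A axis-aligned projections per group
            idx = rng.choice(p, size=d, replace=False)
            sub = S[np.ix_(idx, idx)]      # project S onto the chosen coordinate subset
            vals, vecs = np.linalg.eigh(sub)
            if vals[-1] > best_val:        # keep the projection with the largest top eigenvalue
                best_val, best_vec, best_idx = vals[-1], vecs[:, -1], idx
        # aggregate eigenvector information from the selected projection
        scores[best_idx] += best_val * np.abs(best_vec)

    support = np.argsort(scores)[-k:]      # coordinates with the largest aggregated scores
    vals, vecs = np.linalg.eigh(S[np.ix_(support, support)])
    v = np.zeros(p)
    v[support] = vecs[:, -1]               # refit the leading eigenvector on the estimated support
    return v
```

Because each pass over the data is a fixed number of small eigendecompositions, the sketch is non-iterative in the sense described above: no initial estimate of the principal component is required.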
About the Speaker
Tengyao is a Lecturer in Statistical Data Science in the Department of Statistical Science, University College London. Previously, he was a research fellow at the Cantab Capital Institute for the Mathematics of Information and a PhD student of Professor Richard Samworth at the University of Cambridge.
Poster