Presenters: Andrew Davidson, Lin Chen, Fan Gao, Milad Bakhshizadeh, Shuaiwen Wang
On Consistency and Sparsity for Principal Components Analysis in High Dimensions
Minimax sparse principal subspace estimation in high dimensions
Optimal detection of sparse principal components in high dimension
Do semidefinite relaxations solve sparse PCA up to the information limit?
Presenters: Yixin Wang, Timothy Jones, Sihan Huang, Phyllis Wan, Morgane Austern
A nonparametric view of network models and Newman–Girvan and other modularities
Minimax rates of community detection in stochastic block models
Optimal Rates for Community Estimation in the Weighted Stochastic Block Model
Presenters: Jing Wu, Adji Bousso Dieng, Wenda Zhou, Gabriel Loaiza Ganem, Promit Ghosal
Sample Complexity of Dictionary Learning and Other Matrix Factorizations
Sparse and Unique Nonnegative Matrix Factorization Through Data Preprocessing
When Does Non-Negative Matrix Factorization Give a Correct Decomposition into Parts?
Presenters: Yulin Yao, Pratyay Datta, Tong Li, Jin Hyung Lee, Ding Zhou
Minimax-Optimal Rates For Sparse Additive Models Over Kernel Classes Via Convex Programming
Approximation of Functions of Few Variables in High Dimensions
Presenters: David Hirshberg, Kiran Vodrahalli, Rishabh Dudeja
Exact post-selection inference, with application to the lasso
Confidence intervals for low dimensional parameters in high dimensional linear models