A group of students will read papers on surprising risk behaviors of prediction models in modern overparameterized regimes (e.g., deep learning), such as double descent; a short illustrative sketch follows the schedule below.
- July 18 (M): Reconciling Modern Machine-Learning Practice and the Classical Bias–Variance Trade-Off by Belkin et al.
- July 21 (R): Understanding Deep Learning (Still) Requires Rethinking Generalization by Zhang et al. (Discussion led by Lloyd Goldstein and Xuerong Wang)
- July 25 (M): The Implicit Bias of Gradient Descent on Separable Data by Soudry et al. (Discussion led by Arkobrato Gupta and Yumo Peng)
- July 28 (R): Two Models of Double Descent for Weak Features by Belkin et al. (Discussion led by Haotian Xie and Sayoni Roychowdhury)
- August 1 (M): How Many Variables Should Be Entered in a Regression Equation? by Breiman and Freedman (Read and discussed by all)
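
As a rough preview of the double-descent phenomenon studied in the Belkin et al. readings, here is a minimal NumPy sketch (my own illustration, not code from any of the papers): it fits minimum-norm least squares with a growing number of weak features, in the spirit of the setup in "Two Models of Double Descent for Weak Features". All names and parameter values (`n`, `d_total`, `sigma`, the feature counts) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_total, sigma = 40, 100, 0.5  # train size, total feature pool, label-noise level

def trial(p):
    """Excess test MSE of the minimum-norm least-squares fit using the first p features."""
    beta = rng.normal(size=d_total) / np.sqrt(d_total)  # ground-truth coefficients
    X = rng.normal(size=(n, d_total))
    y = X @ beta + sigma * rng.normal(size=n)           # noisy training labels
    X_test = rng.normal(size=(2000, d_total))
    # pinv gives ordinary least squares for p < n and the min-norm interpolant for p >= n.
    beta_hat = np.linalg.pinv(X[:, :p]) @ y
    # Compare predictions against the noiseless signal on fresh test points.
    return np.mean((X_test[:, :p] @ beta_hat - X_test @ beta) ** 2)

for p in (2, 10, 20, 35, 40, 45, 60, 100):
    mse = np.mean([trial(p) for _ in range(30)])        # average over repeated draws
    print(f"p = {p:3d}   avg test MSE = {mse:.3f}")
```

Run as written, the test risk typically spikes near the interpolation threshold p = n = 40 and then descends again as p grows past n, which is the qualitative picture the readings analyze.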