Posts

2024 Summer Reading

A group of students will read papers on case influence, model sensitivity and related topics.
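
For readers who want a concrete anchor for the topic, here is a minimal sketch of a classical case-influence diagnostic, Cook's distance for least squares, on synthetic data (an illustration only, not drawn from the papers on the list):

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's distance for each case in an OLS fit (a classical case-influence measure)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
    h = np.diag(H)                          # leverages
    r = y - X @ beta                        # residuals
    s2 = r @ r / (n - p)                    # residual variance estimate
    return (r**2 / (p * s2)) * h / (1 - h)**2

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=50)
print(cooks_distance(X, y).round(3))
```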

2023 Autumn Statistical Learning Reading Group

The Statistical Learning reading group will meet this semester from 11:00 am to 12:30 pm in Scott Lab N056 every other Tuesday, starting September 5th. Students may register for course credit by enrolling in STAT 8750.01. If you are not officially enrolled but want to be added to the reading group email list, please contact Haozhen Yu at yu.2823@osu.edu.

This semester we will read the book Reinforcement Learning: An Introduction by Sutton and Barto.

  • September 5: Haozhen Yu, Torey Hilbert and Qian Zhou will discuss Chapters 1 and 2.
  • September 19: Fangyi Wang and Xuerong Wang will discuss Chapter 3.
  • October 3: Wenxin Du and Rezoanoor Rahman will discuss Chapter 4.
  • October 17: Arkajyoti Bhattacharjee and Ningyi Liu will discuss Chapter 5.
  • October 31: Yingyu Cheng and Meghna Kalra will discuss Chapter 5.
  • November 14: Yue Ma and Zhenbang Jiao will discuss Chapter 6.
  • November 28: Biqing Yang and Xinyu Zhang will discuss Chapter 6.
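
For a taste of the early chapters, here is a minimal sketch of the epsilon-greedy k-armed bandit from Chapter 2, run on synthetic Gaussian arms (illustrative settings, not the book's experiments):

```python
import numpy as np

def epsilon_greedy_bandit(true_means, steps=1000, eps=0.1, seed=0):
    """Sample-average epsilon-greedy action selection (the Chapter 2 setting)."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    Q = np.zeros(k)   # action-value estimates
    N = np.zeros(k)   # action counts
    rewards = []
    for _ in range(steps):
        # Explore with probability eps, otherwise exploit the current best estimate.
        a = rng.integers(k) if rng.random() < eps else int(np.argmax(Q))
        r = rng.normal(true_means[a], 1.0)   # noisy reward
        N[a] += 1
        Q[a] += (r - Q[a]) / N[a]            # incremental sample-average update
        rewards.append(r)
    return Q, np.mean(rewards)

Q, avg = epsilon_greedy_bandit([0.1, 0.5, 0.9])
print(Q.round(2), round(avg, 2))
```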

Weekly Topics and Presenters (SP23 Stat 8750.01, Part II)

In the second part of the term we will learn a little bit about Conformal Prediction. Here is the schedule of presentations.
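
As a primer, here is a minimal sketch of split conformal prediction with a least-squares base model (one common variant, chosen for illustration; which variants the group covered is not recorded here):

```python
import numpy as np

def split_conformal_interval(X_tr, y_tr, X_cal, y_cal, x_new, alpha=0.1):
    """Split conformal prediction interval around a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)    # fit on the training split
    scores = np.abs(y_cal - X_cal @ beta)                 # calibration residuals
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample-adjusted quantile
    q = np.quantile(scores, level, method="higher")
    pred = x_new @ beta
    return pred - q, pred + q   # interval with marginal coverage >= 1 - alpha

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -1.0]) + rng.normal(size=200)
lo, hi = split_conformal_interval(X[:100], y[:100], X[100:], y[100:], X[0])
print(round(lo, 2), round(hi, 2))
```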

Weekly Topics and Presenters (SP23 Stat 8750.01, Part I)

  • January 19: Meijia Shao will discuss motivation, concepts, definitions, and basic examples.
  • January 26: Fangyi Wang will discuss randomized responses.
  • February 2: Zhizhen Zhao will discuss the Laplace mechanism.
  • February 9: Xuerong Wang will discuss post-processing, group privacy, and composition.
  • February 16: Chenze Li will discuss the exponential mechanism.
  • February 23: Zhenbang Jiao will discuss report noisy max.
  • March 2: Fangyi Wang will discuss Gaussian Differential Privacy (JRSS-B discussion paper).
  • March 9: Yuan Zhang will discuss Gaussian Differential Privacy (JRSS-B discussion paper).
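
To make one of the scheduled topics concrete, here is a minimal sketch of the Laplace mechanism (the February 2 topic); the counting query below is an illustrative example, not taken from the presentation:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value plus Laplace(sensitivity / epsilon) noise, which gives
    epsilon-differential privacy for a query with the given L1 sensitivity."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(scale=sensitivity / epsilon)

# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
true_count = 42
print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5))
```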

2023 Spring Statistical Learning Reading Group (Part I)

The first part of the reading group (8 meetings before spring break) will focus on the topic of differential privacy (DP). Here are some materials for your reference (mostly “top-k” Google results).

Textbooks
  • Dwork & Roth book
  • Salil Vadhan tutorial

Websites
  • Marco Gaboardi course
  • Yu-Xiang Wang course
  • Wasserman slides
  • JH-SV course
  • AS-JU course

We will meet weekly in Enarson 258 on Thursdays, 11:30–12:25.

Students may register for course credit by enrolling in STAT 8750.01. If you are not officially enrolled but want to be added to the reading group email list, please contact Haozhen Yu at yu.2823@osu.edu.

2022 Autumn Reading

The Statistical Learning reading group will meet this semester after a long hiatus! We will meet from 11:30 am to 12:30 pm in Cockins Hall Room 212 every other Tuesday, starting September 6th. Students may register for course credit by enrolling in STAT 8750.01. If you are not officially enrolled but want to be added to the reading group email list, please contact Haozhen Yu at yu.2823@osu.edu.

September 13: Seminar – Arnab Auddy

Time and Location: September 13 (Tuesday), 11:30 am–12:30 pm, in CH 212

Speaker: Arnab Auddy (Columbia University)

Title: Why and how to use orthogonally decomposable tensors for statistical learning

Abstract: As we encounter more and more complex data generating mechanisms, it becomes necessary to model higher order interactions among the observed variables. Orthogonally decomposable tensors provide a unified framework for such modeling in a number of interesting statistical problems. While this decomposition is a natural extension of the matrix SVD to tensors, it automatically provides much better identifiability properties. Moreover, a small perturbation affects each singular vector in isolation, and hence their recovery does not depend on the gap between consecutive singular values. In addition to the attractive statistical properties, the tensor decomposition problem in this case presents us with intriguing computational challenges. To understand these better, we will explore some statistical-computational tradeoffs, and also describe tractable methods that provide rate-optimal estimators for the tensor singular vectors.
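
As background for the talk, here is a minimal sketch of the tensor power method with deflation for a symmetric orthogonally decomposable third-order tensor; this is a standard algorithm in the area, not necessarily the speaker's method:

```python
import numpy as np

def tensor_power_components(T, n_components, n_iter=200, seed=0):
    """Recover components of a symmetric odeco 3-tensor
    T = sum_i lambda_i v_i (x) v_i (x) v_i by power iteration plus deflation."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    T = T.copy()
    weights, comps = [], []
    for _ in range(n_components):
        v = rng.normal(size=d)
        v /= np.linalg.norm(v)
        for _ in range(n_iter):
            v = np.einsum("ijk,j,k->i", T, v, v)   # tensor-vector-vector product T(., v, v)
            v /= np.linalg.norm(v)
        lam = np.einsum("ijk,i,j,k->", T, v, v, v)
        if lam < 0:               # fix sign ambiguity: (lam, v) ~ (-lam, -v)
            lam, v = -lam, -v
        T -= lam * np.einsum("i,j,k->ijk", v, v, v)   # deflate the found component
        weights.append(lam)
        comps.append(v)
    return np.array(weights), np.array(comps)

# Build an odeco tensor from orthonormal vectors and check recovery.
d = 4
V = np.linalg.qr(np.random.default_rng(1).normal(size=(d, d)))[0]
lams = np.array([3.0, 2.0, 1.0])
T = sum(l * np.einsum("i,j,k->ijk", v, v, v) for l, v in zip(lams, V.T))
w, C = tensor_power_components(T, 3)
print(w.round(2))
```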

2022 Summer Reading

A group of students will read papers on interesting risk behaviors of prediction models in modern overparameterized regimes (e.g., deep learning).
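
One phenomenon in this literature, double descent of the test risk as model size grows past the interpolation threshold, is easy to simulate; below is a minimal sketch with random ReLU features and minimum-norm least squares (hypothetical settings, not reproduced from any paper on the list):

```python
import numpy as np

rng = np.random.default_rng(0)

def test_error_vs_width(n=50, d=20, widths=(5, 10, 25, 45, 50, 55, 100, 400)):
    """Test error of min-norm least squares on random ReLU features as width grows.
    The error typically spikes near width == n (the interpolation threshold) and
    then falls again: the 'double descent' shape."""
    X = rng.normal(size=(n, d))
    Xt = rng.normal(size=(1000, d))
    beta = rng.normal(size=d)
    y = X @ beta + 0.5 * rng.normal(size=n)
    yt = Xt @ beta
    errs = []
    for p in widths:
        W = rng.normal(size=(d, p))
        F, Ft = np.maximum(X @ W, 0), np.maximum(Xt @ W, 0)  # random ReLU features
        coef, *_ = np.linalg.lstsq(F, y, rcond=None)         # min-norm solution when p > n
        errs.append(np.mean((Ft @ coef - yt) ** 2))
    return dict(zip(widths, np.round(errs, 2)))

print(test_error_vs_width())
```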

April 24: Seminar – Yongdai Kim

Time and Location: April 24, 3:00–4:00 pm in CH 212

Speaker: Yongdai Kim (Seoul National University, Korea)

Title: Fast learning with deep learning architectures for classification

Abstract: We derive fast convergence rates for a deep neural network (DNN) classifier with the rectified linear unit (ReLU) activation function learned using the hinge loss. We consider three cases for the true model: (1) a smooth decision boundary, (2) a smooth conditional class probability, and (3) the margin condition (i.e., the probability of inputs near the decision boundary is small). We show that the DNN classifier learned using the hinge loss achieves fast convergence rates in all three cases, provided that the architecture (i.e., the number of layers, number of nodes, and sparsity) is carefully selected. An important implication is that DNN architectures are very flexible for use in various cases without much modification. In addition, we consider a DNN classifier learned by minimizing the cross-entropy and give conditions for fast convergence rates. If time allows, computational algorithms for choosing the right size of deep architecture to achieve fast convergence rates will be discussed.

This is joint work with Ph.D. students Ilsang Ohn and Dongha Kim.
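
To see the talk's setup in code, here is a minimal PyTorch sketch of a small ReLU network trained with the hinge loss on toy two-class data (illustrative only; the architectures and conditions in the paper are more delicate):

```python
import torch
from torch import nn

torch.manual_seed(0)
# Toy two-class data with labels in {-1, +1}, as the hinge loss expects.
X = torch.randn(400, 2)
y = (X[:, 0] * X[:, 1] > 0).float() * 2 - 1   # XOR-like decision boundary

model = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(500):
    f = model(X).squeeze(1)
    loss = torch.clamp(1 - y * f, min=0).mean()   # hinge loss on the network output
    opt.zero_grad()
    loss.backward()
    opt.step()

acc = (torch.sign(model(X).squeeze(1)) == y).float().mean().item()
print(f"training accuracy: {acc:.2f}")
```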