Qinglin Meng
I am a PhD candidate in Computer Science at Purdue University (2022–present), advised by Steve Hanneke. I received my B.S. in Mathematics from Tsinghua University in 2022.
My research focuses on machine learning theory, reinforcement learning, and large language models. I am broadly interested in the theoretical foundations of machine learning, including statistical learning theory, online learning, and the design of algorithms with provable guarantees.
News
- 2026 — Paper accepted at ACL 2026: Towards Intrinsic Interpretability of Large Language Models
- 2025 — Paper accepted at ICML 2025: Representation Preserving Multiclass Agnostic to Realizable Reduction
- 2024 — Paper published in JMLR: Learning Dynamic Mechanisms in Unknown Environments
Selected Publications
* Equal contribution
Learning Dynamic Mechanisms in Unknown Environments: A Reinforcement Learning Approach
Shuang Qiu*, Boxiang Lyu*, Qinglin Meng*, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan
Journal of Machine Learning Research (JMLR), 25(397):1–73, 2024 · Paper
Towards Intrinsic Interpretability of Large Language Models: A Survey of Design Principles and Architectures
Yutong Gao*, Qinglin Meng*, Yuan Zhou, Liangming Pan
ACL 2026 Main Conference · Paper
An Optimal Sauer Lemma Over ℓ-ary Alphabets
Steve Hanneke, Qinglin Meng, Shay Moran, Amirreza Shaeiri
arXiv preprint, 2026 · Paper · (alphabetical order)
Representation Preserving Multiclass Agnostic to Realizable Reduction
Steve Hanneke, Qinglin Meng, Amirreza Shaeiri
ICML 2025 · Paper · (alphabetical order)
