Lecture: A Broader View of Thompson Sampling (Posted: 2025-11-19)


Title: A Broader View of Thompson Sampling

Speaker: Yanlin Qu, Postdoctoral Researcher, Columbia Business School

Host: Assistant Professor Zhuodong Tang (唐卓栋), Antai College of Economics and Management, Shanghai Jiao Tong University

Time: 14:00-15:30, Wednesday, November 26, 2025

Venue: Room A511, Antai Building

Abstract:

Thompson Sampling is one of the most widely used and studied bandit algorithms, known for its simple structure, low regret, and solid theoretical guarantees. Yet, in stark contrast to most other families of bandit algorithms, the exact mechanism through which posterior sampling (as introduced by Thompson) is able to "properly" balance exploration and exploitation remains a mystery. In this talk we show that the core insight needed to address this question stems from recasting Thompson Sampling as an online optimization algorithm. To distill this, a key conceptual tool is introduced, which we refer to as "faithful" stationarization of the regret formulation. Essentially, the finite-horizon dynamic optimization problem is converted into a stationary counterpart that "closely resembles" the original objective (in contrast, the classical infinite-horizon discounted formulation, which leads to the Gittins index, alters the problem and objective too significantly). The newly crafted time-invariant objective can be studied using Bellman's principle, which leads to a time-invariant optimal policy. When viewed through this lens, Thompson Sampling admits a simple online optimization form that mimics the structure of the Bellman-optimal policy, in which greediness is regularized by a measure of residual uncertainty based on point-biserial correlation. This answers the question of how Thompson Sampling balances exploration and exploitation and, moreover, provides a principled framework for studying and further improving Thompson's original idea.
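As background for the abstract above, here is a minimal sketch of classical Thompson Sampling in the textbook Beta-Bernoulli setting (posterior sampling in its simplest form, not the talk's online-optimization reformulation); the arm probabilities, horizon, and function name are illustrative assumptions.

```python
import numpy as np

def thompson_sampling_bernoulli(arms_p, horizon, seed=0):
    """Illustrative sketch: Thompson Sampling for a Bernoulli bandit with
    independent Beta(1,1) priors; `arms_p` holds the true (unknown)
    success probabilities, used here only to simulate rewards."""
    rng = np.random.default_rng(seed)
    k = len(arms_p)
    alpha = np.ones(k)  # posterior Beta alpha (1 + observed successes)
    beta = np.ones(k)   # posterior Beta beta (1 + observed failures)
    best = max(arms_p)
    regret = 0.0
    for _ in range(horizon):
        # Posterior sampling: draw one mean estimate per arm, play the argmax.
        theta = rng.beta(alpha, beta)
        arm = int(np.argmax(theta))
        reward = rng.random() < arms_p[arm]  # simulated Bernoulli reward
        alpha[arm] += reward
        beta[arm] += 1 - reward
        regret += best - arms_p[arm]
    return regret

# Hypothetical example: cumulative regret grows slowly because sampling
# concentrates on the best arm as its posterior uncertainty shrinks.
print(thompson_sampling_bernoulli([0.3, 0.5, 0.7], horizon=5000))
```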

About the Speaker:

Yanlin Qu is a postdoctoral research scholar in the Decision, Risk, and Operations Division at Columbia Business School, working with Assaf Zeevi and Hongseok Namkoong. He earned his PhD in Management Science and Engineering from Stanford University, advised by Peter Glynn and Jose Blanchet. Working at the interface of Operations Research and Machine Learning, he combines methods from both fields to study stochastic systems and their associated decision-making problems, for example analyzing Markov chains via deep learning and understanding Bayesian bandits via online optimization.


All faculty and students are welcome to attend!