arXiv:2209.14414

Optimistic Posterior Sampling for Reinforcement Learning with Few Samples and Tight Guarantees

Published on Sep 28, 2022

AI-generated summary

OPSRL, an optimistic posterior sampling algorithm for reinforcement learning, achieves near-optimal regret bounds by using a logarithmic number of posterior samples per state-action pair and leveraging a new anti-concentration inequality for Dirichlet distributions.

Abstract

We consider reinforcement learning in an environment modeled by an episodic, finite, stage-dependent Markov decision process of horizon H with S states, and A actions. The performance of an agent is measured by the regret after interacting with the environment for T episodes. We propose an optimistic posterior sampling algorithm for reinforcement learning (OPSRL), a simple variant of posterior sampling that only needs a number of posterior samples logarithmic in H, S, A, and T per state-action pair. For OPSRL we guarantee a high-probability regret bound of order at most $\mathcal{O}(\sqrt{H^3 S A T})$, ignoring $\mathrm{polylog}(H, S, A, T)$ terms. The key novel technical ingredient is a new sharp anti-concentration inequality for linear forms which may be of independent interest. Specifically, we extend the normal approximation-based lower bound for Beta distributions by Alfers and Dinges [1984] to Dirichlet distributions. Our bound matches the lower bound of order $\Omega(\sqrt{H^3 S A T})$, thereby answering the open problems raised by Agrawal and Jia [2017b] for the episodic setting.
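The core mechanism described above, drawing a small batch of posterior samples per state-action pair and backing up with the most optimistic one, can be illustrated with a short sketch. The snippet below is a hypothetical, simplified illustration for one stage of a tabular MDP, not the authors' implementation: the function name, the Dirichlet prior, and the fixed sample count are assumptions (the paper takes the number of samples logarithmic in H, S, A, and T).

```python
import numpy as np

def optimistic_psrl_backup(counts, rewards, v_next, n_samples=7, prior=1.0, rng=None):
    """Sketch of an optimistic posterior-sampling backup for one stage of a tabular MDP.

    For each (state, action) pair, draw several transition distributions from a
    Dirichlet posterior over next states and keep the most optimistic one-step
    backup. A small constant number of samples is used here purely for illustration.

    counts  : (S, A, S) array of observed transition counts n(s, a, s')
    rewards : (S, A) array of empirical mean rewards r(s, a)
    v_next  : (S,) array of value estimates for the next stage
    """
    rng = np.random.default_rng() if rng is None else rng
    S, A, _ = counts.shape
    q = np.zeros((S, A))
    for s in range(S):
        for a in range(A):
            alpha = counts[s, a] + prior                    # Dirichlet posterior parameters
            samples = rng.dirichlet(alpha, size=n_samples)  # (n_samples, S) sampled transitions
            backups = rewards[s, a] + samples @ v_next      # one-step backup per sample
            q[s, a] = backups.max()                         # optimistic choice over the samples
    return q

# Toy usage on random data.
if __name__ == "__main__":
    S, A = 4, 2
    rng = np.random.default_rng(0)
    counts = rng.integers(0, 5, size=(S, A, S)).astype(float)
    rewards = rng.random((S, A))
    v_next = rng.random(S)
    print(optimistic_psrl_backup(counts, rewards, v_next, rng=rng))
```

Taking the maximum over a handful of posterior samples is what makes the backup optimistic: a single sample (as in plain posterior sampling) need not overestimate the optimal value, whereas the best of several samples does so with high probability, which is where the Dirichlet anti-concentration argument enters.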
