Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization


Presented at: Conference on Learning Theory (COLT), Amsterdam, July 2017. Publication date: 2017.

In this paper, we consider the problem of sequentially optimizing a black-box function $f$ based on noisy samples and bandit feedback. We assume that $f$ is smooth in the sense of having a bounded norm in some reproducing kernel Hilbert space (RKHS), yielding a commonly considered non-Bayesian form of Gaussian process bandit optimization. We provide algorithm-independent lower bounds on the simple regret, measuring the suboptimality of a single point reported after $T$ rounds, and on the cumulative regret, measuring the sum of regrets over the $T$ chosen points. For the isotropic squared-exponential kernel in $d$ dimensions, we find that an average simple regret of $\epsilon$ requires $T = \Omega\big(\frac{1}{\epsilon^2} (\log\frac{1}{\epsilon})^{d/2}\big)$, and the average cumulative regret is at least $\Omega\big( \sqrt{T(\log T)^d} \big)$, thus matching existing upper bounds up to the replacement of $d/2$ by $d+O(1)$ in both cases. For the Matérn-$\nu$ kernel, we give analogous bounds of the form $\Omega\big( (\frac{1}{\epsilon})^{2+d/\nu}\big)$ and $\Omega\big( T^{\frac{\nu + d}{2\nu + d}} \big)$, and discuss the resulting gaps to the existing upper bounds.
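The two regret notions defined in the abstract can be sketched as follows. This is a minimal illustration only; the helper names, the reported point, and the toy quadratic objective are hypothetical and do not come from the paper:

```python
def simple_regret(f, x_star, reported_point):
    """Suboptimality of the single point reported after T rounds:
    f(x*) - f(x_reported)."""
    return f(x_star) - f(reported_point)

def cumulative_regret(f, x_star, queried_points):
    """Sum of per-round gaps f(x*) - f(x_t) over the T chosen points."""
    return sum(f(x_star) - f(x_t) for x_t in queried_points)

# Toy example (hypothetical): f(x) = -(x - 0.5)^2 is maximized at x* = 0.5.
f = lambda x: -(x - 0.5) ** 2
points = [0.1, 0.4, 0.5]           # the T = 3 points chosen by some algorithm
print(simple_regret(f, 0.5, 0.4))  # gap of one reported point
print(cumulative_regret(f, 0.5, points))  # sum of gaps over all rounds
```

The lower bounds in the paper state how fast these quantities can possibly shrink as $T$ grows, for any algorithm, when $f$ has bounded RKHS norm under the given kernel.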

Keywords: Gaussian processes; Bandits; Online optimization; Reproducing kernel Hilbert space; Lower bounds; Cumulative regret; Simple regret; Bayesian optimization

Reference: EPFL-CONF-228833

Authors: Scarlett, Jonathan; Bogunovic, Ilija; Cevher, Volkan

Source: https://infoscience.epfl.ch/record/228833?ln=en