Published October 4, 2022 | Version v1
Publication

Constant regret for sequence prediction with limited advice

Description

We investigate the problem of cumulative regret minimization for individual sequence prediction with respect to the best expert in a finite family of size K, under limited access to information. We assume that in each round the learner predicts using a convex combination of at most p experts, and can then observe a posteriori the losses of at most m experts. We assume that the loss function is range-bounded and exp-concave. In the standard multi-armed bandit setting, where the learner is allowed to play only one expert per round and observe only its feedback, the known optimal regret bounds are of order O(√(KT)). We show that allowing the learner to play one additional expert per round and observe one additional feedback substantially improves the regret guarantees. We provide a strategy that combines only p = 2 experts per round for prediction and observes the losses of m ≥ 2 experts. Its randomized regret (with respect to the internal randomization of the learner's strategy) is of order O((K/m) log(Kδ⁻¹)) with probability 1 − δ, i.e., independent of the horizon T ("constant" or "fast rate" regret), provided that p ≥ 2 and m ≥ 3. We prove that this rate is optimal up to a logarithmic factor in K. In the case p = m = 2, we provide an upper bound of order O(K² log(Kδ⁻¹)), with probability 1 − δ. Our strategies do not require any prior knowledge of the horizon T or of the confidence parameter δ. Finally, we show that if the learner is constrained to observe only one expert's feedback per round, the worst-case regret is at the "slow rate" Ω(√(KT)), suggesting that synchronous observation of at least two experts per round is necessary to achieve constant regret.
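To make the protocol concrete, the following is a minimal, self-contained Python sketch of the limited-advice setting described above. It is not the paper's strategy: the rule used here (mix the empirical leader with a uniformly random challenger for p = 2, then observe m = 3 losses) is an illustrative assumption, as are the toy data, the squared loss, and all variable names.

```python
import numpy as np

rng = np.random.default_rng(0)

K, T, m = 10, 5000, 3   # experts, horizon, observed losses per round (m >= 2)
# p = 2: the prediction is a convex combination of two experts per round.

# Toy data: a bounded target sequence and K fixed expert forecasts in [0, 1].
targets = rng.uniform(0.0, 1.0, size=T)
expert_preds = rng.uniform(0.0, 1.0, size=(T, K))

def loss(pred, y):
    # Squared loss on [0, 1] is range-bounded and exp-concave.
    return (pred - y) ** 2

cum_obs_loss = np.zeros(K)   # cumulative loss over rounds where expert i was observed
obs_counts = np.zeros(K)     # number of rounds where expert i was observed
learner_loss = 0.0
expert_losses = np.zeros(K)  # true cumulative losses, used only to measure regret

for t in range(T):
    # Play p = 2 experts: the current empirical leader plus a uniformly
    # random challenger, mixed 50/50 (an illustrative choice, not the
    # paper's strategy).
    avg_est = cum_obs_loss / np.maximum(obs_counts, 1)
    leader = int(np.argmin(avg_est))
    challenger = int(rng.integers(K))
    pred = 0.5 * expert_preds[t, leader] + 0.5 * expert_preds[t, challenger]
    learner_loss += loss(pred, targets[t])

    # Observe the losses of m experts a posteriori: the played ones plus
    # uniformly random others, up to the budget m.
    observed = {leader, challenger}
    while len(observed) < m:
        observed.add(int(rng.integers(K)))
    for i in observed:
        cum_obs_loss[i] += loss(expert_preds[t, i], targets[t])
        obs_counts[i] += 1

    expert_losses += loss(expert_preds[t], targets[t])

regret = learner_loss - expert_losses.min()
print(f"regret vs. best expert after T={T} rounds: {regret:.2f}")
```

This sketch only illustrates the information constraints (p = 2 experts played, m losses observed per round); the leader/challenger heuristic carries no regret guarantee and does not reproduce the paper's analysis or its constant-regret bound.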

Additional details

Created:
December 4, 2022
Modified:
December 1, 2023