Published May 28, 2023
| Version v1
Conference paper
Optimistic Online Caching for Batched Requests
- Affiliations:
  - École Nationale Supérieure de Lyon
  - Inria Sophia Antipolis - Méditerranée (CRISAM), Institut National de Recherche en Informatique et en Automatique (Inria)
  - Network Engineering and Operations (NEO), Inria Sophia Antipolis - Méditerranée (CRISAM), Inria
- Funding: This research was supported in part by Inria under the exploratory action MAMMALS.
Description
In this paper we study online caching problems where predictions of future requests, e.g., provided by a machine learning model, are available. We consider different optimistic caching policies that are based on the Follow-The-Regularized-Leader (FTRL) algorithm and enjoy strong theoretical guarantees in terms of regret. These new policies have a higher computational cost than classic ones like LRU and LFU, as each update of the cache state requires solving a constrained optimization problem. We then study their performance when the cache is updated less frequently, in order to amortize the update cost over time or over multiple requests.
- Audience: International
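The description above is only a summary; the paper itself defines the exact policies and their regret bounds. As a loose illustration of the kind of update it describes, here is a minimal Python sketch of an FTRL-style fractional caching policy with an optimistic prediction term and batched updates. All names (`OptimisticFTRLCache`, `project_capped_simplex`), the L2 regularizer, and the bisection-based projection are assumptions chosen for this sketch, not the paper's actual algorithm.

```python
import numpy as np

def project_capped_simplex(v, k, tol=1e-9):
    """Euclidean projection of v onto {x : 0 <= x <= 1, sum(x) = k},
    found by bisection on the shift tau with sum(clip(v - tau, 0, 1)) = k."""
    lo, hi = v.min() - 1.0, v.max()
    while hi - lo > tol:
        tau = 0.5 * (lo + hi)
        if np.clip(v - tau, 0.0, 1.0).sum() > k:
            lo = tau  # allocation still over capacity: shift further down
        else:
            hi = tau
    return np.clip(v - 0.5 * (lo + hi), 0.0, 1.0)

class OptimisticFTRLCache:
    """Hypothetical FTRL-based fractional cache with optional predictions.

    The cache state x lives in the capped simplex (capacity k). Every
    `batch_size` requests we solve the constrained FTRL step
        x = argmax <score, x> - (sigma / 2) * ||x||^2,
    which reduces to projecting score / sigma onto the capped simplex;
    batching amortizes this (relatively costly) update over many requests.
    """

    def __init__(self, catalog_size, capacity, sigma=1.0, batch_size=1):
        self.k = capacity
        self.sigma = sigma
        self.batch_size = batch_size
        self.grad_sum = np.zeros(catalog_size)  # cumulative request counts
        self.pending = np.zeros(catalog_size)   # requests since last update
        self.seen = 0
        # Initial state: uniform fractional allocation k / N.
        self.x = project_capped_simplex(np.zeros(catalog_size), capacity)

    def request(self, item, prediction=None):
        """Serve one request; returns the fractional hit value in [0, 1].

        `prediction`, if given, is a vector of predicted future request
        counts over the catalog (e.g., from a machine learning model).
        """
        hit = self.x[item]
        self.pending[item] += 1.0
        self.seen += 1
        if self.seen % self.batch_size == 0:
            self.grad_sum += self.pending
            self.pending[:] = 0.0
            score = self.grad_sum.copy()
            if prediction is not None:
                score = score + prediction  # optimism: add predicted requests
            self.x = project_capped_simplex(score / self.sigma, self.k)
        return hit

# Tiny usage example: catalog of 100 items, capacity 10, update every 5 requests.
cache = OptimisticFTRLCache(catalog_size=100, capacity=10, batch_size=5)
rng = np.random.default_rng(0)
hits = sum(cache.request(rng.integers(100)) for _ in range(1000))
```

In this sketch, a larger `batch_size` means fewer (costly) projection steps at the price of a staler cache state, which is the amortization trade-off the paper studies.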
Additional details
- URL: https://inria.hal.science/hal-04367129
- URN: urn:oai:HAL:hal-04367129v1
- Origin repository: UNICA