May 28, 2023 (v1) Conference paper
In this paper we study online caching problems where predictions of future requests, e.g., provided by a machine learning model, are available. We consider several optimistic caching policies based on the Follow-The-Regularized-Leader (FTRL) algorithm, which enjoy strong theoretical guarantees in terms of regret. These new policies have a...
Uploaded on: January 5, 2024
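The optimistic FTRL caching idea in this abstract can be illustrated with a minimal fractional-caching sketch. This is a hedged illustration under stated assumptions, not the paper's actual algorithm: it assumes a catalog of N equal-size files, a cache of fractional capacity k, a quadratic regularizer, and a single predicted next request folded into the cumulative gradient; the names `OptimisticFTRLCache` and `project_capped_simplex` are hypothetical.

```python
import numpy as np

def project_capped_simplex(v, k):
    # Euclidean projection of v onto {x in [0,1]^N : sum(x) = k},
    # found by bisection on the dual variable mu.
    lo, hi = v.min() - 1.0, v.max()
    for _ in range(100):
        mu = (lo + hi) / 2.0
        if np.clip(v - mu, 0.0, 1.0).sum() > k:
            lo = mu
        else:
            hi = mu
    return np.clip(v - (lo + hi) / 2.0, 0.0, 1.0)

class OptimisticFTRLCache:
    """Fractional cache of capacity k over a catalog of N files.
    The per-step reward for a request r is the cached fraction x[r]."""
    def __init__(self, N, k, sigma=1.0):
        self.N, self.k, self.sigma = N, k, sigma
        self.grad_sum = np.zeros(N)   # cumulative reward gradients

    def state(self, predicted_next=None):
        v = self.grad_sum.copy()
        if predicted_next is not None:
            v[predicted_next] += 1.0  # optimistic step: trust the prediction
        return project_capped_simplex(v / self.sigma, self.k)

    def update(self, requested):
        # Gradient of the linear reward <e_r, x> is the unit vector e_r.
        self.grad_sum[requested] += 1.0
```

With capacity 2 over 5 files and no history, the state is uniform (0.4 per file); after repeated requests for files 0 and 1 plus a prediction for file 0, the policy concentrates the cache on those two files.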
May 2024 (v1) Journal article
In this paper, we investigate 'optimistic' online caching policies, distinguished by their use of future request predictions derived, for example, from machine learning models. Traditional online optimistic policies, grounded in the Follow-The-Regularized-Leader (FTRL) algorithm, incur a higher computational cost compared to classic policies...
Uploaded on: November 5, 2024

April 2023 (v1) Journal article
In fog computing, customers' microservices may demand access to connected objects, data sources, and computing resources outside the domain of their fog provider. In practice, the locality of connected objects makes a multi-domain approach mandatory in order to broaden the scope of resources available to a single-domain fog provider. We consider...
Uploaded on: January 7, 2024

November 28, 2023 (v1) Conference paper
Online learning algorithms have been successfully used to design caching policies with regret guarantees. Existing algorithms assume that the cache knows the exact request sequence, but this may not be feasible in high load and/or memory-constrained scenarios, where the cache may have access only to sampled requests or to approximate requests'...
Uploaded on: December 30, 2023
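The sampled-request setting described in this abstract can be illustrated with a small inverse-propensity-weighting sketch. This is a hedged illustration, not the paper's method: it assumes each request is observed independently with a known probability `sample_prob`, so that reweighting observed requests by `1/sample_prob` yields an unbiased estimate of the full request-count gradient; the helper name `sampled_gradient` is hypothetical.

```python
import numpy as np

def sampled_gradient(requests, sample_prob, N, rng):
    """Unbiased estimate of the per-file request counts when each request
    is observed only with probability sample_prob (inverse-propensity weighting)."""
    g = np.zeros(N)
    for r in requests:
        if rng.random() < sample_prob:
            g[r] += 1.0 / sample_prob  # reweight so E[g] equals the true counts
    return g
```

Averaging this estimator over many sampled passes recovers the exact counts a full-observation policy would use, at a fraction of the measurement cost.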
2023 (v1) Journal article
In Federated Learning (FL), devices (also referred to as clients) can exhibit heterogeneous availability patterns, often correlated over time and across clients. This paper addresses the problem of heterogeneous and correlated client availability in FL. Our theoretical analysis is the first to demonstrate the negative impact of correlation on...
Uploaded on: December 29, 2023

May 17, 2023 (v1) Conference paper
The enormous amount of data produced by mobile and IoT devices has motivated the development of federated learning (FL), a framework allowing such devices (or clients) to collaboratively train machine learning models without sharing their local data. FL algorithms (like FedAvg) iteratively aggregate model updates computed by clients on their...
Uploaded on: December 29, 2023
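The FedAvg-style iterative aggregation mentioned in this abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's algorithm: it assumes a least-squares local objective, full client participation each round, and server-side averaging weighted by local dataset size; `local_update` and `fedavg_round` are hypothetical names.

```python
import numpy as np

def local_update(model, data, lr=0.1, steps=5):
    """A few steps of gradient descent on a local least-squares
    objective (a stand-in for each client's local training)."""
    X, y = data
    w = model.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(model, clients):
    """One FedAvg round: every client trains locally from the current
    global model; the server averages the resulting models, weighted
    by local dataset size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = np.stack([local_update(model, c) for c in clients])
    return (sizes[:, None] * updates).sum(axis=0) / sizes.sum()
```

On synthetic clients whose data share one underlying linear model, repeated rounds drive the global model toward that shared solution without any client ever exposing its raw data.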
June 19, 2020 (v1) Conference paper
This paper studies the tradeoff between running cost and processing delay in order to optimally orchestrate multiple fog applications. Fog applications process batches of objects' data along chains of containerised microservice modules, which can run either for free on a local fog server or in the cloud at a cost. Processor sharing techniques,...
Uploaded on: December 4, 2022