Published 2023 | Version v1
Publication

Warm Start Fitted Q Reinforcement Learning for Electric Vehicle Depot Charging

Description

Electric Vehicle (EV) charging coordination is attracting growing interest as a way to better integrate EV depots into the electrical grid. Several solutions exist, ranging from rule-based control to, more recently, Reinforcement Learning (RL) techniques. Batch RL approaches are particularly interesting for their ability to use past data to train the controller. However, such solutions typically rely on action space discretization, which does not exploit the continuous charging set-points of Electric Vehicle Supply Equipment (EVSE), hindering the scalability of the solutions. In this paper, we leverage the dual annealing global optimization algorithm to select continuous actions from a neural network RL agent trained with fitted Q-iteration on synthetic data generated by a custom depot simulator. Results from a one-year simulation of a depot with 10 EVSEs are reported and compared against a random policy, showing favorable results over several criteria.
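
The following is a minimal Python sketch, not taken from the paper, of the core idea described above: using SciPy's dual annealing optimizer to pick continuous per-EVSE charging set-points that maximize a learned neural Q-function. The Q-network, state dimension, power bounds, and training batch are illustrative assumptions; in the actual fitted Q-iteration procedure the network would be refit repeatedly on logged depot data.

# Sketch: continuous action selection via dual annealing over a neural Q-function.
# All shapes, bounds, and the dummy training batch are assumptions for illustration.
import numpy as np
from scipy.optimize import dual_annealing
from sklearn.neural_network import MLPRegressor

N_EVSE = 10                               # depot size used in the paper's simulation
ACTION_BOUNDS = [(0.0, 22.0)] * N_EVSE    # assumed per-EVSE charging set-point range in kW
STATE_DIM = 5                             # assumed depot state dimension

# Stand-in Q-network; in fitted Q-iteration it would be refit at each iteration
# on batch data with targets r + gamma * max_a' Q(s', a').
q_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)

# Fit on a tiny random batch so the sketch runs end to end.
rng = np.random.default_rng(0)
dummy_state_actions = rng.uniform(size=(256, STATE_DIM + N_EVSE))
dummy_q_targets = rng.uniform(size=256)
q_net.fit(dummy_state_actions, dummy_q_targets)

def greedy_action(state):
    """Select continuous charging set-points that maximize Q(state, action)."""
    def neg_q(action):
        # Concatenate state and candidate action, then negate Q so the
        # minimizer effectively maximizes the Q-value.
        sa = np.concatenate([state, action])[None, :]
        return -float(q_net.predict(sa)[0])

    result = dual_annealing(neg_q, bounds=ACTION_BOUNDS, maxiter=100)
    return result.x                       # one set-point per EVSE

action = greedy_action(rng.uniform(size=STATE_DIM))
print(action)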

Additional details

Identifiers

URL
https://hdl.handle.net/11567/1212916
URN
urn:oai:iris.unige.it:11567/1212916

Origin repository

UNIGE