Conference paper, Year: 2024

Sample-efficient reinforcement learning for environments with rare high-reward states

Daniel Mastropietro
Urtzi Ayesta
Matthieu Jonckheere

Abstract

We introduce FVAC (Fleming-Viot Actor-Critic), an algorithm for efficient learning of optimal policies in reinforcement learning problems with rare, high-reward states. FVAC uses an Actor-Critic policy gradient, with the critic estimated via the so-called Fleming-Viot particle system, a stochastic process originally used to model population evolution, which boosts the visit frequency of the rare states. This frequency boosting is obtained by forcing exploration outside a set of states identified as highly visited during an initial exploration of the environment. The only requirements of the method are that learning be set under the average-reward criterion, and that a black-box simulator or emulator can be run on the environment. We showcase the method's performance in windy grid worlds, where a non-zero reward is observed only at a terminal cell, which is difficult to reach due to the wind. Our results show that FVAC learns significantly faster than standard reinforcement learning algorithms based on Monte-Carlo exploration with temporal difference learning.
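The record carries only the abstract, but the frequency-boosting mechanism it describes (particles that are resampled onto a surviving particle whenever they enter the set of highly visited states) can be illustrated with a toy example. The following is a minimal sketch of a Fleming-Viot particle system on a one-dimensional random walk; the walk, the absorbing set, and all parameters are illustrative assumptions, not the authors' FVAC implementation.

```python
import random

# Minimal sketch of a Fleming-Viot particle system on a toy random walk.
# All names and parameters (N_PARTICLES, ABSORBING, step) are illustrative
# assumptions, not the FVAC implementation from the paper.

N_PARTICLES = 50
N_STEPS = 10_000
ABSORBING = {0, 1, 2}   # stand-in for the "highly visited" states
START = 3               # all particles start outside the absorbing set

def step(state):
    """One step of a simple random walk on {0, ..., 10}."""
    return max(0, min(10, state + random.choice((-1, 1))))

particles = [START] * N_PARTICLES
visit_counts = {}

for _ in range(N_STEPS):
    i = random.randrange(N_PARTICLES)
    particles[i] = step(particles[i])
    if particles[i] in ABSORBING:
        # Fleming-Viot resampling: instead of being absorbed, the particle
        # jumps to the state of another (uniformly chosen) surviving
        # particle, all of which are outside the absorbing set.
        j = random.choice([k for k in range(N_PARTICLES) if k != i])
        particles[i] = particles[j]
    visit_counts[particles[i]] = visit_counts.get(particles[i], 0) + 1

# The empirical visit frequencies approximate the distribution of the walk
# conditioned on avoiding the absorbing set, which is what boosts
# exploration of the otherwise rarely visited states.
total = sum(visit_counts.values())
for s in sorted(visit_counts):
    print(s, round(visit_counts[s] / total, 3))
```

In FVAC, a scheme of this kind is used to estimate the critic under the average-reward criterion; here it serves only to show how resampling keeps the particle cloud away from the frequently visited states.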

Main file

FVAC-EWRL-2024-cameraready.pdf (1.29 MB)
Origin: Files produced by the author(s)
Dates and versions

hal-04917977, version 1 (28-01-2025)

Identifiers

  • HAL Id: hal-04917977, version 1

Cite

Daniel Mastropietro, Urtzi Ayesta, Matthieu Jonckheere. Sample-efficient reinforcement learning for environments with rare high-reward states. 17th European Workshop on Reinforcement Learning, Oct 2024, Toulouse, France. ⟨hal-04917977⟩