Train Once, Get a Family: State-Adaptive Balances for Offline-to-Online Reinforcement Learning

NeurIPS 2023

Spotlight

1Department of Automation, BNRist, Tsinghua University
2Beijing Academy of Artificial Intelligence (BAAI)
3Independent Researcher
*Indicates Equal Contribution

Indicates Corresponding Authors

The illustrative framework of our Family Offline-to-Online RL (FamO2O). FamO2O trains a policy family from offline datasets and selects policies state-adaptively using online feedback. Easily integrated, FamO2O enhances existing algorithms' performance with statistical significance.

TL;DR

We propose FamO2O, a simple yet effective framework that empowers existing offline-to-online RL algorithms to determine state-adaptive improvement-constraint balances. FamO2O utilizes a universal model to train a family of policies with different improvement/constraint intensities, and a balance model to select a suitable policy for each state.
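Concretely, the "policy family" can be read as a single network that takes the balance coefficient as an extra input. Below is a minimal sketch of this idea for an advantage-weighted-regression-style base algorithm (e.g., AWAC/IQL); the class and function names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): a universal policy conditioned on a
# balance coefficient beta, trained with an advantage-weighted objective.
import torch
import torch.nn as nn


class UniversalPolicy(nn.Module):
    """Gaussian policy pi_u(a | s, beta): one network covering a family of balances."""

    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, action_dim)
        self.log_std = nn.Linear(hidden, action_dim)

    def forward(self, state, beta):
        h = self.trunk(torch.cat([state, beta], dim=-1))
        return self.mean(h), self.log_std(h).clamp(-5.0, 2.0)


def universal_policy_loss(policy, state, action, advantage, beta):
    """Advantage-weighted log-likelihood. A larger beta leans toward policy
    improvement; a smaller beta stays closer to the dataset behavior."""
    mean, log_std = policy(state, beta)
    dist = torch.distributions.Normal(mean, log_std.exp())
    log_prob = dist.log_prob(action).sum(-1)
    weight = torch.exp(beta.squeeze(-1) * advantage).clamp(max=100.0)
    return -(weight.detach() * log_prob).mean()


# During offline training, beta is sampled from a pre-defined range for each
# batch, so pi_u(. | s, beta) learns the whole family of balances at once.
```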

Abstract

Offline-to-online reinforcement learning (RL) is a training paradigm that combines pre-training on a pre-collected dataset with fine-tuning in an online environment. However, the incorporation of online fine-tuning can intensify the well-known distributional shift problem. Existing solutions tackle this problem by imposing a policy constraint on the policy improvement objective in both offline and online learning. They typically advocate a single balance between policy improvement and constraints across diverse data collections. This one-size-fits-all manner may not optimally leverage each collected sample due to the significant variation in data quality across different states. To this end, we introduce Family Offline-to-Online RL (FamO2O), a simple yet effective framework that empowers existing algorithms to determine state-adaptive improvement-constraint balances. FamO2O utilizes a universal model to train a family of policies with different improvement/constraint intensities, and a balance model to select a suitable policy for each state. Theoretically, we prove that state-adaptive balances are necessary for achieving a higher policy performance upper bound. Empirically, extensive experiments show that FamO2O offers a statistically significant improvement over various existing methods, achieving state-of-the-art performance on the D4RL benchmark.

Methods



FamO2O’s inference process. For each state \(\mathbf{s}\), the balance model \(\pi_b\) computes a state-adaptive balance coefficient \(\beta_\mathbf{s}\). Based on \(\mathbf{s}\) and \(\beta_\mathbf{s}\), the universal model \(\pi_u\) outputs an action \(\mathbf{a}\).
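In code, the inference step in the figure amounts to two forward passes per state: one through the balance model to get \(\beta_\mathbf{s}\), and one through the universal model conditioned on \((\mathbf{s}, \beta_\mathbf{s})\). A hedged sketch, assuming the UniversalPolicy above and a balance network that maps states to a coefficient in a bounded range (all names are illustrative, not the authors' API):

```python
# Hedged sketch of FamO2O-style inference with a state-adaptive balance model.
import torch
import torch.nn as nn


class BalanceModel(nn.Module):
    """pi_b(s) -> beta_s: maps a state to a balance coefficient in [beta_min, beta_max]."""

    def __init__(self, state_dim, beta_min=0.0, beta_max=10.0, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )
        self.beta_min, self.beta_max = beta_min, beta_max

    def forward(self, state):
        return self.beta_min + (self.beta_max - self.beta_min) * self.net(state)


@torch.no_grad()
def select_action(balance_model, universal_policy, state):
    state = state.unsqueeze(0)                 # add batch dimension
    beta_s = balance_model(state)              # state-adaptive balance coefficient
    mean, _ = universal_policy(state, beta_s)  # act with the selected family member
    return torch.tanh(mean).squeeze(0)         # deterministic action for evaluation
```

During online fine-tuning, the balance model is updated from environment feedback, so that \(\beta_\mathbf{s}\) tracks which improvement-constraint balance works best around each state.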





State-wise adaptivity visualization in a simple maze environment. (a) The data quality at the crossing point in the 5th row is higher than at the one in the 2nd row. (b) Colors denote the balance coefficient values at traversed cells during inference. FamO2O typically behaves conservatively at cells with high-quality data and radically at cells with low-quality data.




Experimental Evaluations



Comparison of our FamO2O against various competitors on D4RL normalized scores. All methods are tested on D4RL Locomotion and AntMaze with 6 random seeds. FamO2O achieves state-of-the-art performance by a statistically significant margin over all competitors, spanning offline-to-online RL (i.e., IQL, Balanced Replay (BR), CQL, AWAC, and TD3+BC), online RL (i.e., SAC), and behavior cloning (BC).





Enhanced performance achieved by FamO2O after online fine-tuning. We compare the D4RL normalized scores of standard base algorithms (AWAC and IQL, denoted "Base") with those of the same algorithms augmented with FamO2O (denoted "Ours").




For a detailed overview of our study, including both qualitative and quantitative results, please refer to our paper.

BibTeX

@misc{wang2023train,
        title={Train Once, Get a Family: State-Adaptive Balances for Offline-to-Online Reinforcement Learning}, 
        author={Shenzhi Wang and Qisen Yang and Jiawei Gao and Matthieu Gaetan Lin and Hao Chen and Liwei Wu and Ning Jia and Shiji Song and Gao Huang},
        year={2023},
        eprint={2310.17966},
        archivePrefix={arXiv},
        primaryClass={cs.LG}
}