Wind Farm Control via Offline Reinforcement Learning with Adversarial Training
Published in IEEE Transactions on Automation Science and Engineering, 2025
Recommended citation: Y. Huang and X. Zhao, "Wind Farm Control via Offline Reinforcement Learning with Adversarial Training," IEEE Transactions on Automation Science and Engineering, 2025. https://ieeexplore.ieee.org/abstract/document/10912435
In a wind farm, wakes produced by upstream wind turbines significantly diminish the wind capture of downstream ones, reducing the power generation of the entire farm. Reinforcement learning (RL) control can alleviate the wake effect by enhancing coordination among turbines in arrays. However, training such a collaborative control policy through online RL requires millions of interactions with a computational fluid dynamics-based simulator, making it computationally expensive. Wind farm operators possess extensive datasets from past operations, which could ease this sample-generation difficulty. Nevertheless, online RL struggles to use these offline datasets directly due to covariate shift and biased value estimation. In this paper, we introduce an offline RL method named Multi-Agent Offline Behavior Imitation (MAOBI) to cope with this challenge. First, by employing f-GANs (Generative Adversarial Networks), MAOBI estimates the divergence between the learning policy and the behavior policy from samples generated by each. By minimizing this divergence, MAOBI mimics the behavior policy hidden in the offline datasets. The method then identifies state-action pairs that yield high returns, further improving the control policy. Results demonstrate that MAOBI-trained control policies achieve performance comparable to state-of-the-art online RL methods when deployed in a high-fidelity wind farm simulator.
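As a rough illustration of the adversarial imitation step described in the abstract, the sketch below trains a discriminator to distinguish offline (state, action) pairs from pairs produced by the learning policy, and updates the policy to minimize the resulting divergence estimate. This is only a minimal sketch under assumed settings, not the paper's MAOBI implementation: the network sizes, dimensions, optimizer settings, and the choice of the standard GAN bound (corresponding to the Jensen-Shannon divergence within the f-GAN family) are all illustrative assumptions, and the high-return filtering step is omitted.

```python
# Minimal adversarial behavior-imitation sketch (illustrative only; not the paper's code).
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2  # assumed dimensions for illustration

# Discriminator T(s, a): used to form the variational (GAN-style) divergence estimate
# between the learning policy and the behavior policy contained in the offline data.
disc = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
# Deterministic policy pi(s) -> a, e.g. normalized yaw set-points in [-1, 1].
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                       nn.Linear(64, action_dim), nn.Tanh())

opt_d = torch.optim.Adam(disc.parameters(), lr=3e-4)
opt_pi = torch.optim.Adam(policy.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

def update(states, behavior_actions):
    """One adversarial update on a batch of offline (state, action) pairs."""
    # Discriminator step: separate behavior-policy pairs from learning-policy pairs.
    with torch.no_grad():
        policy_actions = policy(states)
    real_logits = disc(torch.cat([states, behavior_actions], dim=-1))
    fake_logits = disc(torch.cat([states, policy_actions], dim=-1))
    d_loss = (bce(real_logits, torch.ones_like(real_logits))
              + bce(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Policy step: minimize the estimated divergence (i.e. fool the discriminator).
    gen_logits = disc(torch.cat([states, policy(states)], dim=-1))
    pi_loss = bce(gen_logits, torch.ones_like(gen_logits))
    opt_pi.zero_grad(); pi_loss.backward(); opt_pi.step()
    return d_loss.item(), pi_loss.item()

# Example usage with random stand-in data in place of a real wind-farm dataset.
states = torch.randn(256, state_dim)
behavior_actions = torch.rand(256, action_dim) * 2 - 1
print(update(states, behavior_actions))
```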
If you are interested in this paper, please cite it as:
@article{huang2025wind,
  title={Wind Farm Control via Offline Reinforcement Learning with Adversarial Training},
  author={Huang, Yubo and Zhao, Xiaowei},
  journal={IEEE Transactions on Automation Science and Engineering},
  year={2025},
  publisher={IEEE}
}