SMU Data Science Review

Abstract

The integration of large-scale wind power into modern electrical grids presents persistent challenges due to variability, curtailment, and compliance with operational constraints. This study proposes a multi-agent reinforcement learning (MARL) framework for optimizing wind energy distribution within the Texas power grid. The system employs three specialized agents—managing wind curtailment, storage utilization, and load adjustments—to collaboratively balance supply and demand under dynamic grid conditions. Using historical operational data from the Electric Reliability Council of Texas (ERCOT), the framework was trained and evaluated on a range of scenarios encompassing both typical and extreme operating conditions. Results demonstrate substantial performance improvements compared to baseline dispatch strategies, including a measurable reduction in supply–demand mismatch, improved storage state-of-charge stability, and enhanced coordination among agents. The approach offers a scalable, adaptable, and regulation-compliant pathway for renewable integration in grids with high penetration of variable energy resources.

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial 4.0 License.
