How Satisfactorily Can Deep Reinforcement Learning Methods Simulate Electricity Market Dynamics? Benchmarking via Bi-level Optimization

Nick Harder (Albert Ludwigs University of Freiburg); Lesia Mitridati (Technical University of Denmark); Farzaneh Pourahmadi (Technical University of Denmark); Anke Weidlich (University of Freiburg); Jalal Kazempour (Technical University of Denmark)

Abstract

Electricity markets are becoming increasingly complex, which makes their analysis challenging and demands advanced analytical tools to manage and understand market dynamics. This paper explores the application of deep reinforcement learning (DRL) and bi-level optimization models to analyze and simulate electricity markets. We introduce a bi-level optimization framework that incorporates realistic market constraints, such as non-convex operational characteristics and binary decision variables, to establish an upper-bound benchmark for evaluating the performance of DRL algorithms. The results show that DRL methods do not reach the theoretical upper bounds set by the bi-level models, confirming the effectiveness of the proposed model in providing a clear performance target for DRL. This benchmarking approach not only demonstrates DRL’s current capabilities and limitations in complex market environments but also aids in developing more effective DRL strategies by providing clear, quantifiable targets for improvement. The proposed method can also quantify the cost of the information gap, since DRL methods operate under more realistic conditions than optimization techniques and do not need to assume complete knowledge of the system. This study thus provides a foundation for future research aimed at improving market understanding, and potentially market efficiency, as electricity markets grow more complex. The effectiveness of our methodology is further validated through a large-scale case study involving 150 power plants, demonstrating its scalability and applicability to real-world scenarios.
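
For illustration only, the toy Python sketch below mimics the benchmarking idea described in the abstract: a brute-force search over a single strategic generator's bid price stands in for the bi-level solve (upper-level profit maximization subject to lower-level market clearing), and the resulting optimal profit is compared against the profit achieved by a hypothetical DRL bidding policy. All quantities, prices, and the assumed DRL bid are invented for this example and are not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): the bi-level optimum
# serves as an upper bound when evaluating a DRL bidding policy.
# All numbers below are hypothetical.

import numpy as np

def market_clearing(bids, demand):
    """Toy uniform-price clearing: dispatch the cheapest bids first."""
    order = np.argsort(bids[:, 1])        # sort bids by price
    dispatched = np.zeros(len(bids))
    remaining = demand
    price = 0.0
    for i in order:
        qty = min(bids[i, 0], remaining)
        dispatched[i] = qty
        remaining -= qty
        if qty > 0:
            price = bids[i, 1]            # marginal accepted bid sets the price
        if remaining <= 0:
            break
    return dispatched, price

# Competitors' (quantity, price) bids and the strategic unit's data (hypothetical)
competitors = np.array([[40.0, 20.0], [40.0, 35.0]])
capacity, marginal_cost, demand = 50.0, 25.0, 100.0

def profit(bid_price):
    """Profit of the strategic unit for a given bid price."""
    bids = np.vstack([competitors, [capacity, bid_price]])
    dispatch, price = market_clearing(bids, demand)
    return dispatch[-1] * (price - marginal_cost)

# "Bi-level" upper bound: brute-force enumeration of candidate bid prices
# stands in for the bi-level / MPEC solve used in the paper.
candidate_prices = np.linspace(0.0, 50.0, 501)
upper_bound = max(profit(p) for p in candidate_prices)

# Profit achieved by some trained DRL policy (hypothetical bid of 40 EUR/MWh).
drl_profit = profit(40.0)

print(f"bi-level upper bound : {upper_bound:8.2f}")
print(f"DRL policy profit    : {drl_profit:8.2f}")
print(f"optimality gap       : {upper_bound - drl_profit:8.2f}")
```

The reported gap corresponds to the kind of quantifiable performance target discussed above; in the paper this comparison is carried out with a full bi-level model with non-convex constraints rather than the brute-force stand-in used here.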