Our research addresses the critical challenge of restoring electricity in the aftermath of disasters, when conventional grid infrastructure is often compromised. We propose an innovative solution that leverages the rapidly growing fleet of Electric Vehicles (EVs) as a decentralized, mobile energy resource. Through Vehicle-to-Grid (V2G) technology, this distributed network of batteries can be coordinated to supply power, significantly enhancing community resilience. The core of our project is an AI-assisted coordination framework, which uses Multi-Agent Reinforcement Learning (MARL) to coordinate charging and discharging decisions across the fleet.
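To make the agent formulation concrete, the minimal sketch below shows what a single EV agent might observe and do at each decision step in such a framework. The field names, units, and the three-action scheme are illustrative assumptions chosen for exposition, not the exact interfaces used in our simulations.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    """Discrete actions available to each EV agent at every time step."""
    CHARGE = 0
    IDLE = 1
    DISCHARGE = 2


@dataclass
class EVObservation:
    """Local view an EV agent receives from the simulated environment.

    All fields are illustrative; a fuller simulation would also expose
    signals such as vehicle location, battery degradation, and forecasts.
    """
    soc: float            # battery state of charge, 0.0-1.0
    price: float          # current electricity price ($/kWh)
    unmet_demand_kw: float  # power the local grid still needs (kW)
    time_step: int        # index within the restoration horizon
```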
In this framework, we model each EV as an autonomous agent that learns optimal charging and discharging strategies through interaction with a simulated post-disaster environment. The agents are trained toward a dual objective: minimizing economic costs while maximizing the power delivered to disaster-stricken areas. We achieve this with a novel reward mechanism that incentivizes agents to charge during low-cost periods and discharge when the grid is most in need, thereby aligning private economic interests with the public good. Because control is decentralized, this AI-assisted approach is inherently more robust and scalable than traditional centralized control methods, making it well suited to the chaotic and unpredictable conditions of a post-disaster scenario.
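As a rough sketch of how such a dual-objective reward could be expressed, the function below combines an economic term with a grid-relief term. The `ev_reward` name, the weighting coefficients, and the time-step convention are hypothetical placeholders, not the exact reward used in our experiments.

```python
def ev_reward(
    action_power_kw: float,    # >0 when discharging into the grid, <0 when charging
    price: float,              # current electricity price ($/kWh)
    unmet_demand_kw: float,    # power the local grid still needs (kW)
    step_hours: float = 0.25,  # length of one decision interval (hours)
    w_cost: float = 1.0,       # weight on the economic term
    w_relief: float = 2.0,     # weight on serving unmet demand
) -> float:
    """Combine economic and grid-support objectives into a single scalar reward."""
    energy_kwh = action_power_kw * step_hours
    # Negative when buying energy (charging), positive when selling it back,
    # so charging during low-price periods costs the agent very little.
    economic_term = energy_kwh * price
    # Only discharged power that covers real unmet demand earns the relief bonus,
    # pushing agents to discharge precisely when the grid needs power most.
    served_kw = max(0.0, min(action_power_kw, unmet_demand_kw))
    relief_term = served_kw * step_hours
    return w_cost * economic_term + w_relief * relief_term


# Example: discharging 5 kW into a grid that still needs 20 kW at $0.30/kWh
# yields a positive reward from both the sale revenue and the relief bonus.
print(ev_reward(action_power_kw=5.0, price=0.30, unmet_demand_kw=20.0))
```

Under this shaping, an agent that charges while prices are low pays only a small immediate penalty but banks energy it can later discharge for the larger relief bonus, which is the incentive alignment between private cost and public benefit described above.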
The significance of this work lies in its potential to transform disaster response, providing a faster and more adaptive method for power restoration that can save lives and mitigate economic loss. Looking forward, our research will focus on scaling our simulations to model more complex, real-world urban environments, exploring more advanced reinforcement learning algorithms to enhance inter-agent cooperation, and developing hardware-in-the-loop testbeds to validate our findings.