SMU BEST CAPSTONE PRESENTATION
PATENTED TECHNOLOGY

MARL

Multi-Agent Wind-Grid Optimizer

A reinforcement learning framework that coordinates wind curtailment, storage dispatch, and demand response to optimize Texas grid operations using historical ERCOT data.

18%
Wind Curtailment Reduction
Significant improvement in renewable energy utilization
25%
Grid Mismatch Penalty Reduction
Enhanced grid stability and operational efficiency
11.3K
MW Average Grid Mismatch
vs 15-25K MW baseline performance

System Architecture

Three-Agent Architecture

Wind Agent: Curtailment optimization
Storage Agent: Charge/discharge coordination
Load Agent: Demand response management
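The three-agent decomposition above can be sketched as follows. This is a minimal illustration, not the project's actual code: the class names, thresholds, and placeholder rule-based policies are assumptions standing in for the learned policies, and the state fields are a simplified view of the grid condition vector.

```python
from dataclasses import dataclass

@dataclass
class GridState:
    wind_forecast_mw: float   # forecast wind generation
    storage_soc: float        # battery state of charge, 0..1
    load_mw: float            # current demand

class WindAgent:
    # Chooses a discrete curtailment level (fraction of wind held back).
    def act(self, s: GridState) -> float:
        # Placeholder rule: curtail only when wind far exceeds load.
        return 0.2 if s.wind_forecast_mw > 1.5 * s.load_mw else 0.0

class StorageAgent:
    # Chooses charge (+MW), idle (0), or discharge (-MW).
    def act(self, s: GridState) -> float:
        if s.wind_forecast_mw > s.load_mw and s.storage_soc < 0.9:
            return 100.0   # absorb surplus wind
        if s.wind_forecast_mw < s.load_mw and s.storage_soc > 0.1:
            return -100.0  # cover the shortfall
        return 0.0

class LoadAgent:
    # Chooses a demand-response adjustment (fraction of flexible load shed).
    def act(self, s: GridState) -> float:
        return 0.05 if s.load_mw > s.wind_forecast_mw else 0.0

def step(s: GridState) -> float:
    # Combine the three agents' actions into a single grid mismatch (MW).
    curtail = WindAgent().act(s)
    storage = StorageAgent().act(s)
    shed = LoadAgent().act(s)
    supply = s.wind_forecast_mw * (1 - curtail) - storage
    demand = s.load_mw * (1 - shed)
    return supply - demand
```

Each agent acts on its own lever (curtailment, dispatch, demand response), and the resulting mismatch is the quantity the system learns to drive toward zero.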

Training Paradigm

Centralized Training: Global state awareness during learning

Decentralized Execution: Local observations for real-time operations

Robust to communication failures between agents, and efficient for real-time deployment
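A minimal sketch of the execution side of this paradigm: each agent reads only its local slice of the global state, so no inter-agent communication is needed at runtime. The slice layout of the 11-dimensional state and the stand-in threshold policy are illustrative assumptions, not the project's actual layout.

```python
import numpy as np

# Assumed partition of the 11-dimensional state into local observations.
LOCAL_SLICES = {
    "wind":    slice(0, 4),   # wind forecast features
    "storage": slice(4, 8),   # storage state features
    "load":    slice(8, 11),  # demand features
}

class ThresholdPolicy:
    """Trivial stand-in policy: fires when its first observed feature is high."""
    def __init__(self, threshold: float):
        self.threshold = threshold

    def act(self, local_obs: np.ndarray) -> int:
        return int(local_obs[0] > self.threshold)

def execute(global_state: np.ndarray, policies: dict) -> dict:
    # Decentralized execution: each agent sees only its local slice.
    # (During centralized training, by contrast, learners may condition
    # on the full global_state.)
    return {name: policy.act(global_state[LOCAL_SLICES[name]])
            for name, policy in policies.items()}
```

The same policies trained with global awareness run here on purely local inputs, which is what makes the system tolerant of communication issues in the field.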

Technical Implementation

Algorithm: Deep Q-Network (DQN) with discrete control
State Space: 11-dimensional grid condition vectors
Action Space: Bounded discrete controls for real grid operations
Data Source: Historical ERCOT operational data
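The DQN setup listed above can be sketched in a few lines. This is a hedged illustration under stated assumptions: the hidden-layer width, number of discrete actions, and hyperparameters are placeholders, and the plain NumPy forward pass stands in for whatever deep-learning framework the project actually used.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS, HIDDEN = 11, 5, 32   # 11-dim state per the text; rest assumed

# Randomly initialized two-layer Q-network weights (illustrative only).
W1 = rng.normal(0, 0.1, (STATE_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))

def q_values(state: np.ndarray) -> np.ndarray:
    h = np.maximum(state @ W1, 0.0)   # ReLU hidden layer
    return h @ W2                     # one Q-value per discrete action

def act(state: np.ndarray, epsilon: float = 0.1) -> int:
    # Epsilon-greedy selection over the bounded discrete action set.
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))

def td_target(reward: float, next_state: np.ndarray, gamma: float = 0.99) -> float:
    # Bellman backup that Q(s, a) is regressed toward during training.
    return reward + gamma * float(np.max(q_values(next_state)))
```

In training, each 11-dimensional grid condition vector maps to Q-values over the discrete controls, and the network is updated toward the TD target computed from the next observed state.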

Real-World Impact

Texas leads U.S. wind generation but faces significant curtailment challenges

Adaptive agent-based control offers scalable path for renewable integration

Regulation-aware approach suitable for grid-scale deployment

Performance Analysis

Grid Mismatch Reduction

MARL System: 11.3K MW
Baseline (Low): 15K MW
Baseline (High): 25K MW

Wind Curtailment Comparison

MARL System: 24-26%
Traditional Control: 35-40%
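A quick arithmetic check of the mismatch figures above (values taken directly from this page; note that the reported 25% figure is a penalty reduction, which need not equal the raw mismatch reduction):

```python
# Reported figures, in MW.
marl_mismatch = 11.3e3
baseline_low, baseline_high = 15e3, 25e3

# Relative mismatch reduction against each end of the baseline range.
reduction_low = 1 - marl_mismatch / baseline_low    # vs 15K MW baseline
reduction_high = 1 - marl_mismatch / baseline_high  # vs 25K MW baseline
```

Against the low baseline the mismatch reduction is about 24.7%, and against the high baseline about 54.8%, consistent with the "vs 15-25K MW baseline" claim.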

Research Methodology

Data Preparation

Historical ERCOT operational data processing and scenario generation

Environment Design

Simulated grid environment with realistic constraints and dynamics

Agent Training

Centralized training with reward structures for coordination

Evaluation

Historical replays and stress scenario testing against baselines
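The evaluation step above can be sketched as a historical-replay loop that scores a controller by its mean absolute grid mismatch. The function names and the do-nothing baseline are assumptions for illustration, not the project's actual harness.

```python
import numpy as np

def mean_mismatch(controller, episodes) -> float:
    """Score a controller over replayed episodes.

    episodes: iterable of (wind_mw, load_mw) array pairs drawn from
    historical ERCOT intervals (or generated stress scenarios).
    """
    per_episode = []
    for wind, load in episodes:
        # Absolute supply-demand mismatch at each interval.
        mismatch = [abs(controller(w, l)) for w, l in zip(wind, load)]
        per_episode.append(np.mean(mismatch))
    return float(np.mean(per_episode))

# Do-nothing baseline: deliver all wind against all load, no coordination.
def baseline(wind_mw: float, load_mw: float) -> float:
    return wind_mw - load_mw
```

Running the trained MARL controller and the baseline through the same replayed episodes gives the head-to-head comparison reported in the performance analysis.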

Strategic Significance

This research addresses critical challenges in renewable energy integration on the Texas grid. It demonstrates that multi-agent reinforcement learning can provide scalable, regulation-aware control that optimizes variable renewable resources while maintaining grid stability and reducing operational costs.
