Using AI and Game Theory in the Near-RT-RIC

The Network Intelligence and Automation Lab addresses a critical challenge in the evolution of Open RAN architecture: resolving runtime conflicts between control applications (xApps) within the Near-Real-Time RAN Intelligent Controller (Near-RT-RIC).

Background

Open RAN promotes vendor-neutral innovation by decoupling software and hardware components in mobile networks. However, deploying independently developed xApps in a shared Near-RT-RIC environment introduces the risk of conflicting actions, which can degrade Quality of Service (QoS), destabilize network behavior, or cause unintended service disruptions.

Our Contributions

  • Conflict Taxonomy
    We categorize conflict scenarios into direct, resource-based, and policy-driven types, providing a foundation for systematic resolution strategies (a minimal illustrative sketch follows this list).
  • Game-Theoretic Models
    We apply non-cooperative game theory to model and analyze xApp interactions, enabling conflict resolution through rational, incentive-compatible mechanisms (a toy compromise example also appears after this list).
  • QACM Framework
    Our QoS-Aware Conflict Mitigation (QACM) framework dynamically manages xApp execution priorities based on QoS indicators and network state.
    Read the full paper in IEEE TGCN 2024
  • Conflict Benchmarking
    We introduce benchmark scenarios to test and evaluate mitigation strategies in energy- and mobility-sensitive use cases.
    See preprint on arXiv 2025
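
To make the taxonomy concrete, the sketch below shows one possible in-code representation of these conflict classes and a naive detection rule. It is illustrative only: the class names, action fields, and overlap-based check are simplifications introduced for this page, not part of any O-RAN or Near-RT-RIC API.

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class ConflictType(Enum):
        """Conflict classes from the taxonomy above."""
        DIRECT = auto()          # two xApps set the same control parameter
        RESOURCE_BASED = auto()  # two xApps compete for a shared resource
        POLICY_DRIVEN = auto()   # individually valid actions that violate a policy

    @dataclass
    class XAppAction:
        """A single control decision issued by an xApp (illustrative fields)."""
        xapp_id: str
        target_cell: str
        parameters: dict = field(default_factory=dict)  # e.g. {"tx_power_dbm": 43}
        resources: set = field(default_factory=set)     # e.g. {"prb_pool"}

    def classify_conflict(a: XAppAction, b: XAppAction):
        """Return the conflict class for two actions on the same cell, if any.

        Deliberately naive: overlapping parameter names imply a direct conflict,
        overlapping resources imply a resource-based one. Policy-driven conflicts
        need an external policy check and are out of scope here.
        """
        if a.target_cell != b.target_cell:
            return None
        if a.parameters.keys() & b.parameters.keys():
            return ConflictType.DIRECT
        if a.resources & b.resources:
            return ConflictType.RESOURCE_BASED
        return None

    # Example: a mobility xApp and an energy-saving xApp both retune transmit power.
    mobility = XAppAction("mobility-xapp", "cell-7", {"tx_power_dbm": 43})
    energy = XAppAction("es-xapp", "cell-7", {"tx_power_dbm": 30})
    print(classify_conflict(mobility, energy))  # ConflictType.DIRECT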

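The second sketch shows, again in toy form, how a non-cooperative view of a direct conflict can feed a QoS-aware compromise: each xApp's payoff is its QoS satisfaction at a candidate value of the shared parameter, and the controller picks the value that maximizes the worst-off xApp's payoff. The utility shapes, candidate values, and max-min selection rule are assumptions made for illustration; they are not the published QACM algorithm.

    # Toy direct conflict on a shared parameter (cell transmit power, in dBm).
    # Each xApp exposes a utility in [0, 1]: how well its own QoS target is met
    # at a given value. The shapes below are invented for illustration; a real
    # deployment would derive them from measured KPIs.

    CANDIDATE_TX_POWER_DBM = [30, 33, 36, 39, 43]

    def mobility_utility(p):
        """Mobility xApp prefers high power (coverage, fewer handover failures)."""
        return max(0.0, min(1.0, (p - 30) / 13))

    def energy_utility(p):
        """Energy-saving xApp prefers low power (lower consumption)."""
        return max(0.0, min(1.0, (43 - p) / 13))

    def resolve_conflict(candidates, utilities):
        """Pick the candidate maximizing the minimum utility across xApps.

        A max-min (egalitarian) rule is only one possible compromise criterion;
        weighted sums or Nash-bargaining products are equally valid choices.
        """
        return max(candidates, key=lambda c: min(u(c) for u in utilities))

    best = resolve_conflict(CANDIDATE_TX_POWER_DBM, [mobility_utility, energy_utility])
    print(f"compromise tx power: {best} dBm")  # 36 dBm with these toy utilities
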
Key Publications

  1. IEEE iThings 2023
    Conflict Management in the Near-RT-RIC of Open RAN: A Game Theoretic Approach
    Proposes a non-cooperative game-theoretic model to address runtime xApp conflicts in the Near-RT-RIC.
    Read the paper
  2. IEEE TGCN 2024
    QACM: QoS-Aware xApp Conflict Mitigation in Open RAN
    Introduces a QoS-driven framework to score and mitigate xApp conflicts based on real-time network conditions.
    Read the paper
  3. arXiv 2025
    xApp-Level Conflict Mitigation in O-RAN, a Mobility Driven Energy Saving Case
    Demonstrates the application of QACM in mobility scenarios involving energy-saving xApps.
    Read the preprint

Ongoing Research Directions

  • Integrating reinforcement learning for predictive conflict resolution and pre-emption
  • Developing collaborative and multi-agent game-theoretic models for conflict-aware orchestration
  • Deploying digital twin environments for testing and simulation of conflict scenarios

Collaboration

We actively collaborate with industry partners and open-source communities, including the O-RAN Alliance and the O-RAN Software Community (OSC), to evaluate, refine, and implement our conflict mitigation approaches in real-world deployments.