Enhancing energy flexibility in clusters of buildings through coordinated energy management

This research activity involves Capozzoli Alfonso, Brandi Silvio, Gallo Antonio, Buscemi Giacomo, Savino Sabrina

(see our Collaborations page to find out about the main collaborations active on these research topics)

 

Objective of the activity:

Exploiting the potential of Deep Reinforcement Learning for district energy management.

Framework of the activity:

District energy management should leverage automated algorithms capable of adapting to a changing environment and of learning from users' behavior and historical building-related data in order to optimize, coordinate, and control the different actors of the smart grid (e.g., producers, service providers, consumers). However, the computational complexity associated with district-scale simulation and with the application of advanced control strategies limits the adoption of model-based techniques such as Model Predictive Control (MPC).

From this perspective, the BAEDA lab is conducting research activities aimed at exploiting data-driven control strategies to reduce the computational complexity of the problem. A novel approach exploits the adaptive and potentially model-free nature of Deep Reinforcement Learning (DRL) to coordinate a cluster of buildings.
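
To make the coordination idea concrete, the sketch below shows a centralized DRL-style control loop in which a single agent observes the state of every building in a toy cluster and dispatches their storage charge/discharge actions so as to flatten the aggregated district load. The environment, the reward, and the random placeholder policy are illustrative assumptions written for this page, not the lab's actual implementation.

import numpy as np

class BuildingClusterEnv:
    """Toy district: each building has an hourly demand profile and a storage tank."""
    def __init__(self, n_buildings=3, horizon=24, seed=0):
        rng = np.random.default_rng(seed)
        self.demand = 5 + 3 * rng.random((n_buildings, horizon))  # kW, assumed profiles
        self.capacity = 10.0     # kWh of storage per building (assumed)
        self.horizon = horizon
        self.n = n_buildings

    def reset(self):
        self.t = 0
        self.soc = np.full(self.n, 0.5 * self.capacity)  # state of charge, kWh
        return self._obs()

    def _obs(self):
        # Centralized observation: time of day plus every building's demand and SOC
        return np.concatenate(([self.t / self.horizon],
                               self.demand[:, self.t],
                               self.soc / self.capacity))

    def step(self, action):
        # action[i] in [-1, 1]: discharge (<0) or charge (>0) the storage of building i
        charge = np.clip(action, -1, 1) * 2.0                    # kW, assumed max rate
        charge = np.clip(charge, -self.soc, self.capacity - self.soc)
        self.soc += charge                                       # 1 h time step: kW == kWh
        grid_load = self.demand[:, self.t] + charge              # power drawn from the grid
        reward = -float(grid_load.sum()) ** 2                    # penalize high district loads
        self.t += 1
        done = self.t >= self.horizon
        return (None if done else self._obs()), reward, done

env = BuildingClusterEnv()
obs, done, episode_return = env.reset(), False, 0.0
while not done:
    action = np.tanh(np.random.randn(env.n) * 0.1)   # stand-in for a trained DRL policy
    obs, reward, done = env.step(action)
    episode_return += reward
print(f"episode return (the quantity a DRL agent would learn to maximize): {episode_return:.1f}")

In the research described here, such a placeholder would be replaced by a deep neural network policy trained with a DRL algorithm, and the reward would encode the energy cost and flexibility objectives of interest.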

Figure: Methodological framework for the testing of Deep Reinforcement Learning control algorithms at the district level

Relevant publications on this topic:

Deltetto, D., Coraci, D., Pinto, G., Piscitelli, M. S., Capozzoli, A. (2021). Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings. Energies.

Pinto, G., Deltetto, D., Capozzoli, A. (2021). Data-driven district energy management with surrogate models and deep reinforcement learning. Applied Energy.

Pinto, G., Piscitelli, M. S., Vázquez-Canteli, J. R., Nagy, Z., Capozzoli, A. (2021). Coordinated Energy Management for a cluster of buildings through Deep Reinforcement Learning. Energy.

Pinto, G., Brandi, S., Capozzoli, A., Vázquez-Canteli, J. R., Nagy, Z. (2020). Towards Coordinated Energy Management in Buildings using Deep Reinforcement Learning. 15th SDEWES Conference 2020, Cologne.

 

Objective of the activity:

Developing a data-driven framework for advanced control of building energy systems at energy community level.

Framework of the activity:

The transition from the traditional way of producing and consuming energy towards a more sustainable energy management has shifted the need for flexibility from the generation side to the demand side. In this context, the Energy Community is the new paradigm in which prosumers can take a more active role in their interaction with the grid by aggregating their loads and generation profiles. Energy Communities can therefore be seen as a means of optimizing energy management in smart grids, with positive effects both for the members, who can decrease their energy costs, and for the grid, which can benefit from the provided flexibility. Recent studies have proved that coordinated control architectures for energy management in clusters of buildings are effective at achieving this objective. Nonetheless, developing control strategies, and the district-level digital twins needed to test them, is particularly demanding because of the high complexity of the control problem and its high computational cost.
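
As a minimal numerical illustration of this aggregation mechanism, the toy calculation below (with assumed prices and profiles, not real community data) compares the billing of three prosumers settled individually with the billing of the same prosumers settled as a single community in which the surplus PV generation of one member offsets the imports of the others.

import numpy as np

buy, sell = 0.25, 0.05                 # import / export prices in EUR per kWh (assumed)
load = np.array([[2.0, 1.0],           # member 0: demand over two hours, kWh (assumed)
                 [3.0, 0.5],           # member 1
                 [1.5, 2.5]])          # member 2
pv = np.array([[0.0, 0.0],
               [4.0, 3.0],             # only member 1 owns a PV plant (assumed)
               [0.0, 0.0]])

def billing(net):
    # positive net = energy bought from the grid, negative net = energy sold to it
    return float(np.sum(np.maximum(net, 0) * buy + np.minimum(net, 0) * sell))

individual = sum(billing(load[m] - pv[m]) for m in range(load.shape[0]))
community = billing(load.sum(axis=0) - pv.sum(axis=0))   # aggregated load and generation
print(f"individual settlement: {individual:.2f} EUR, community settlement: {community:.2f} EUR")

In this simplified single-meter view the PV surplus of member 1 offsets the imports of its neighbours at the purchase price instead of being exported at the lower feed-in price, so the community settlement is noticeably cheaper than the sum of the individual ones. Real Energy Community schemes (for example, incentives on virtually shared energy) differ in the details, but the source of the benefit is the same.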

To cope with these research challenges, the BAEDA Lab develops generalizable simulation environments for Energy Communities that serve as virtual testbeds for control strategies. These environments are used to evaluate advanced control strategies in terms of the energy flexibility and energy cost savings achievable by data-driven energy communities, thereby helping to close the gap between the development of such strategies and their testing at the community scale.
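
A possible shape for such a testbed is sketched below: a minimal interface that any Energy Community simulation could expose so that rule-based, MPC, and DRL controllers can be benchmarked on identical scenarios, together with the kind of KPIs (energy cost saving, peak reduction as a simple flexibility proxy) used for the comparison. Class, method, and metric names are assumptions made for illustration; they are not the RECsim API.

from abc import ABC, abstractmethod
import numpy as np

class CommunityTestbed(ABC):
    """Minimal contract a community simulation could satisfy to act as a virtual testbed."""

    @abstractmethod
    def reset(self) -> np.ndarray:
        """Return the initial observation of a simulation episode."""

    @abstractmethod
    def step(self, actions: np.ndarray) -> tuple:
        """Apply the control actions and return (next_observation, info, done),
        where info is a dict containing at least 'cost' (EUR) and 'grid_kW'."""

def evaluate(testbed, controller, baseline):
    """Run one episode with a controller and compute the KPIs used for comparison."""
    obs, done, cost, peak = testbed.reset(), False, 0.0, 0.0
    while not done:
        obs, info, done = testbed.step(controller(obs))
        cost += info["cost"]
        peak = max(peak, info["grid_kW"])
    return {
        "energy_cost": cost,
        "cost_saving": baseline["energy_cost"] - cost,      # vs. a rule-based baseline run
        "peak_reduction_kW": baseline["peak_kW"] - peak,    # simple energy-flexibility proxy
    }

Keeping the interface identical across communities is what makes the environments generalizable: the same controller and evaluation code can be reused on clusters with different buildings, systems, and tariffs.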

Figure: Framework for advanced control of building energy systems at energy community level

Relevant publications on this topic:

Gallo, A., Capozzoli, A. (2024). The role of advanced energy management strategies to operate flexibility sources in Renewable Energy Communities. Energy and Buildings.

Gallo, A., Piscitelli, M. S., Fenili, L., Capozzoli, A. (2023). RECsim—Virtual Testbed for Control Strategies Implementation in Renewable Energy Communities. In International Conference on Sustainability in Energy and Buildings.

 

Objective of the activity:

The study aims to identify the advantages and disadvantages of various control architectures in relation to the case study and objective function.

Framework of the activity:

District Energy Management (DEM) employs different approaches to optimize energy use across buildings, including:

  • Coordinated Management (Centralized): A single agent oversees and controls all buildings.
  • Decentralized Management: Each building operates independently, managing its own control system.
  • Cooperative Management: Buildings manage themselves autonomously while simultaneously leveraging a shared architecture to make decisions that consider global states.

BAEDA Lab is investigating a range of agent-based architectures—including centralized, hierarchical, distributed, and cooperative models with attention mechanisms—to comprehensively assess their strengths and weaknesses for practical implementation.
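
The snippet below is a schematic illustration (not the lab's code) of what distinguishes these architectures in practice: the information each controller sees when selecting the actions of the buildings. Any learning algorithm could sit behind the policy callables; only the information-sharing pattern changes.

import numpy as np

def centralized(policies, states):
    # Coordinated (centralized): one agent sees every building's state
    # and outputs all control actions at once.
    return policies["district"](np.concatenate(states))

def decentralized(policies, states):
    # Decentralized: each building acts on its local state only, with no exchange.
    return [policies[i](s) for i, s in enumerate(states)]

def cooperative(policies, states):
    # Cooperative: each building keeps its own agent but also receives a shared
    # signal (here, simply the average of all states) when deciding its action.
    shared = np.mean(states, axis=0)
    return [policies[i](np.concatenate([s, shared])) for i, s in enumerate(states)]

# Dummy policies and states, only to show the call pattern (a real study trains them)
states = [np.random.default_rng(i).random(4) for i in range(3)]   # 3 buildings, 4 features
local = {i: (lambda s: float(np.tanh(s.sum()))) for i in range(3)}
print(decentralized(local, states))
print(cooperative(local, states))
print(centralized({"district": lambda s: np.tanh(s.reshape(3, -1).sum(axis=1))}, states))

In this toy sketch the shared signal is just the average state; attention-based variants would instead learn how to weight the information coming from the other buildings.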

Figure: Centralized, Decentralized, and Cooperative Architectures of an Energy System Controller in a Cluster of Buildings

Relevant publications on this topic:

Pinto, G., Kathirgamanathan, A., Mangina, E., Finn, D. P., Capozzoli, A. (2022). Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures. Applied Energy.

Objective of the activity:

Optimizing the operation and management of Multi-Energy Systems modeled as Energy Hubs to improve efficiency, reduce costs, and enhance sustainability.

Framework of the activity:

In response to the growing complexity of energy systems, Energy Hubs have emerged as a powerful solution for managing multiple energy carriers: electricity, heating, and cooling. These systems integrate diverse energy sources and technologies, including renewable energy, energy storage, and traditional generation systems, balancing their flows to meet varying demands while optimizing overall performance. However, modeling Energy Hubs presents significant challenges, particularly due to the non-linearity of system interactions and the complexity of managing multiple, interdependent constraints across different energy carriers.

Traditional optimization techniques, such as Mixed-Integer Linear Programming (MILP) and other classic methods, have been widely applied in Energy Hub optimization. While these methods can be effective in certain contexts, they struggle to capture the non-linear dynamics that are intrinsic to modern energy systems. For instance, the interactions between renewable energy sources, energy storage, and conventional generation systems often exhibit complex, non-linear behaviors that cannot be accurately represented with linear approximations. As a result, conventional optimization models, which rely on linear or piecewise linear formulations, are limited in their ability to provide reliable and accurate solutions for such complex systems. Additionally, designing optimizers that can handle the non-linearity and the multitude of constraints in Energy Hubs is highly challenging, as it requires making compromises between accuracy, feasibility, and computational complexity.

Reinforcement Learning (RL) algorithms present a promising solution to these challenges. Unlike traditional optimization techniques, RL algorithms are not constrained by the need for explicit mathematical formulations of the system. They can dynamically learn optimal control policies through interaction with the environment, making them well-suited for handling the non-linear, dynamic, and uncertain nature of Energy Hub systems. RL-based methods can seamlessly incorporate complex constraints and adapt to real-time changes in system behavior, leading to more accurate and efficient decision-making.

By embedding RL-based controls into Energy Hub management, these systems can effectively capture and optimize the non-linear interactions between energy carriers, providing a more robust and scalable approach to managing modern energy networks. This methodology offers a significant advantage over traditional optimization techniques, enabling the development of adaptive, efficient, and sustainable solutions for Energy Hub management.
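
As a minimal illustration of why RL fits this setting, the toy environment below (all parameters are assumed) models an Energy Hub with a CHP unit whose part-load electrical efficiency varies non-linearly. Such a curve would have to be linearized or piecewise-approximated in a MILP, whereas an RL agent can simply interact with the simulated hub and learn a dispatch policy that maximizes the cumulative reward, here the negative operating cost.

import numpy as np

class EnergyHubEnv:
    """Toy Energy Hub: a CHP unit plus grid electricity and a backup gas boiler."""
    def __init__(self, horizon=24, seed=0):
        rng = np.random.default_rng(seed)
        self.elec_demand = 20 + 10 * rng.random(horizon)   # kW (assumed profile)
        self.heat_demand = 15 + 10 * rng.random(horizon)   # kW (assumed profile)
        self.price = 0.10 + 0.15 * rng.random(horizon)     # EUR/kWh of grid electricity (assumed)
        self.horizon = horizon

    def reset(self):
        self.t = 0
        return self._obs()

    def _obs(self):
        return np.array([self.elec_demand[self.t], self.heat_demand[self.t], self.price[self.t]])

    def step(self, chp_setpoint):
        # chp_setpoint in [0, 1]: part load of a 30 kW_el CHP unit
        x = float(np.clip(chp_setpoint, 0.0, 1.0))
        eta_el = 0.20 + 0.15 * np.sqrt(x)                  # non-linear part-load efficiency (assumed)
        chp_el = 30.0 * x                                  # electricity produced, kW
        chp_heat = 1.4 * chp_el                            # heat-to-power ratio (assumed)
        fuel = chp_el / max(eta_el, 1e-6)                  # gas burned by the CHP over one hour, kWh
        grid_el = max(self.elec_demand[self.t] - chp_el, 0.0)               # surplus is curtailed
        boiler_fuel = max(self.heat_demand[self.t] - chp_heat, 0.0) / 0.9   # backup boiler, 90% efficiency
        cost = grid_el * self.price[self.t] + (fuel + boiler_fuel) * 0.06   # 0.06 EUR/kWh gas (assumed)
        reward = -cost
        self.t += 1
        done = self.t >= self.horizon
        return (None if done else self._obs()), reward, done

An RL agent trained over many simulated days of such an environment learns when running the CHP at part load is cheaper than buying from the grid, without any explicit reformulation of the efficiency curve; richer hub models (storage, cooling, uncertain prices) change only the environment, not the way the agent learns.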

Figure: Layout of an Energy Hub