Article

Boltzmann Distributed Replicator Dynamics: Population Games in a Microgrid Context

1 School of Telecommunications Engineering, Universidad Santo Tomás, 110311 Bogotá D.C., Colombia
2 Department of Electric and Electronic Engineering, Universidad Nacional de Colombia, 111321 Bogotá D.C., Colombia
3 Department of Systems and Industrial Engineering, Universidad Nacional de Colombia, 111321 Bogotá D.C., Colombia
* Author to whom correspondence should be addressed.
Received: 30 October 2020 / Revised: 17 December 2020 / Accepted: 4 January 2021 / Published: 15 January 2021
(This article belongs to the Special Issue Optimal Control Theory)
Multi-Agent Systems (MAS) have been used to solve several optimization problems in control systems. MAS make it possible to model the interactions between agents and the complexity of the system, producing functional models that are closer to reality. However, these approaches assume that information is always available to every agent, i.e., they rely on a full-information model. Several research directions have gained importance for tackling scenarios where information constraints are a relevant issue. In this sense, game theory appears as a useful framework that uses the concept of strategy to analyze the interactions of the agents and to maximize their outcomes. In this paper, we propose a distributed learning-based control method that allows analyzing the effect of exploration in MAS. The dynamics are obtained by using Q-learning from reinforcement learning to incorporate exploration into the classic, exploration-free Replicator Dynamics equation. The Boltzmann distribution is then used to derive the Boltzmann-Based Distributed Replicator Dynamics as a tool for controlling agent behavior. This distributed approach can be applied in several engineering applications in which communication constraints between agents must be considered. The behavior of the proposed method is analyzed on a smart grid application for validation purposes. Results show that, despite the lack of full information about the system, the method behaves similarly to traditional centralized approaches when some of its parameters are properly tuned.
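
The abstract describes exploration being folded into the replicator equation through a Boltzmann (softmax) choice rule over Q-values. As a rough illustration of that idea only, the Python sketch below implements the standard Boltzmann Q-learning/replicator relation known from the reinforcement-learning-and-games literature; the function and parameter names (boltzmann_q_replicator_step, tau, the toy generator costs) are hypothetical, and the paper's actual Boltzmann-Based Distributed Replicator Dynamics and microgrid model may differ.

```python
import numpy as np

def boltzmann_q_replicator_step(x, q, tau, dt=0.01):
    """One Euler step of Boltzmann-smoothed replicator-like dynamics.

    x   : population shares over strategies (non-negative, sums to 1)
    q   : Q-value (estimated fitness) per strategy
    tau : temperature; large tau -> more exploration,
          small tau -> behavior close to the classic replicator dynamics
    """
    # Selection term: classic replicator drift, scaled by 1/tau
    avg_q = np.dot(x, q)
    selection = (q - avg_q) / tau
    # Exploration (mutation) term induced by the Boltzmann choice rule:
    # sum_j x_j * ln(x_j / x_i) for each strategy i
    exploration = np.dot(x, np.log(x)) - np.log(x)
    # Euler integration step of the combined dynamics
    x_next = x + dt * x * (selection + exploration)
    x_next = np.clip(x_next, 1e-12, None)
    return x_next / x_next.sum()

# Hypothetical toy example: three generating units with constant
# (negative-cost) fitness, loosely in the spirit of economic dispatch.
costs = np.array([2.0, 3.5, 1.5])    # illustrative cost coefficients
q = -costs                           # fitter = cheaper
x = np.full(3, 1.0 / 3.0)            # start from a uniform population state

for _ in range(20000):
    x = boltzmann_q_replicator_step(x, q, tau=0.5)

# The rest point of these dynamics is the Boltzmann distribution over q,
# i.e. x_i proportional to exp(q_i / tau).
print(x, np.exp(q / 0.5) / np.exp(q / 0.5).sum())
```

With a small tau the population concentrates on the cheapest unit (exploitation), while a large tau spreads the shares more evenly (exploration), which is the trade-off the temperature parameter is meant to control.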
Keywords: Boltzmann distribution; economic dispatch problem; Multi-Agent Systems; population dynamics; revision protocol

MDPI and ACS Style

Chica-Pedraza, G.; Mojica-Nava, E.; Cadena-Muñoz, E. Boltzmann Distributed Replicator Dynamics: Population Games in a Microgrid Context. Games 2021, 12, 8. https://0-doi-org.brum.beds.ac.uk/10.3390/g12010008

AMA Style

Chica-Pedraza G, Mojica-Nava E, Cadena-Muñoz E. Boltzmann Distributed Replicator Dynamics: Population Games in a Microgrid Context. Games. 2021; 12(1):8. https://0-doi-org.brum.beds.ac.uk/10.3390/g12010008

Chicago/Turabian Style

Chica-Pedraza, Gustavo, Eduardo Mojica-Nava, and Ernesto Cadena-Muñoz. 2021. "Boltzmann Distributed Replicator Dynamics: Population Games in a Microgrid Context" Games 12, no. 1: 8. https://0-doi-org.brum.beds.ac.uk/10.3390/g12010008
