Staff member

Ismael Tito Freire González

PhD Student
Synthetic, Perceptive, Emotive and Cognitive Systems (SPECS)

Staff member publications

Freire, I. T., Moulin-Frier, C., Sanchez-Fibla, M., Arsiwalla, X. D., Verschure, P., (2018). Modeling the formation of social conventions in multi-agent populations. arXiv Computer Science (Multiagent Systems), 1-30

In order to understand the formation of social conventions, we need to know the specific roles of control and learning in multi-agent systems. To advance in this direction, we propose, within the framework of the Distributed Adaptive Control (DAC) theory, a novel Control-based Reinforcement Learning architecture (CRL) that can account for the acquisition of social conventions in multi-agent populations solving a benchmark social decision-making problem. Our new CRL architecture, as a concrete realization of DAC multi-agent theory, implements a low-level sensorimotor control loop handling the agent's reactive behaviors (pre-wired reflexes), along with a layer based on model-free reinforcement learning that maximizes long-term reward. We apply CRL in a multi-agent game-theoretic task in which coordination must be achieved in order to find an optimal solution. We show that our CRL architecture is able both to find optimal solutions in discrete and continuous time and to reproduce human experimental data on standard game-theoretic metrics such as efficiency in acquiring rewards, fairness in reward distribution, and stability of convention formation.
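The two-layer design described in the abstract — pre-wired reflexes arbitrating with a model-free reinforcement learning layer — can be illustrated with a minimal sketch. This is not the paper's actual CRL implementation; the class name, the obstacle-avoidance reflex, and the use of tabular Q-learning as the adaptive layer are all illustrative assumptions.

```python
import random

class CRLAgent:
    """Hypothetical sketch of a two-layer control-based RL agent:
    a reactive layer of pre-wired reflexes plus a model-free
    Q-learning layer that maximizes long-term reward."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.n_actions = n_actions

    def reactive(self, obstacle_ahead):
        # Pre-wired reflex (assumed here): turn away when an obstacle
        # is sensed, regardless of what the learned policy prefers.
        return 1 if obstacle_ahead else None  # action 1 = "turn away"

    def act(self, state, obstacle_ahead=False):
        # The reactive layer overrides the adaptive layer when a reflex fires.
        reflex = self.reactive(obstacle_ahead)
        if reflex is not None:
            return reflex
        # Otherwise the adaptive layer acts epsilon-greedily on its Q-values.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        row = self.q[state]
        return row.index(max(row))

    def learn(self, s, a, r, s_next):
        # Standard model-free Q-learning update on the adaptive layer.
        target = r + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.alpha * (target - self.q[s][a])
```

The key design point the sketch captures is the arbitration order: reflexes fire first, so learned behavior can never override basic reactive control.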

Keywords: Computer Science, Multiagent Systems

Freire, I. T., Arsiwalla, X. D., Puigbò, J. Y., Verschure, P., (2018). Limits of multi-agent predictive models in the formation of social conventions. Frontiers in Artificial Intelligence and Applications (ed. Falomir, Z., Gibert, K., Plaza, E.), IOS Press (Amsterdam, The Netherlands), Volume 308: Artificial Intelligence Research and Development, 297-301

A major challenge in cognitive science and AI is to understand how intelligent agents might be able to predict the mental states of other agents during complex social interactions. What are the computational principles of such a Theory of Mind (ToM)? In previous work, we have investigated hypotheses of how the human brain might realize a ToM of other agents in a multi-agent social scenario. In particular, we have proposed control-based cognitive architectures to predict the model of other agents in a game-theoretic task (Battle of the Exes). Our multi-layer architecture implements top-down predictions from adaptive to reactive layers of control and bottom-up error feedback from reactive to adaptive layers. We tested cooperative and competitive strategies among different multi-agent models, demonstrating that, while pure reinforcement learning leads to reasonable efficiency and fairness in social interactions, there are other architectures that can perform better in specific circumstances. However, we found that even the best predictive models fall short of human data in terms of stability of social convention formation. In order to explain this gap between humans and predictive AI agents, in this work we propose introducing the notion of trust, in the form of mutual agreements between agents, which might enhance stability in the formation of conventions such as turn-taking.
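The prediction loop described above — a top-down prediction of the other agent's behavior, with bottom-up prediction-error feedback updating the model — can be sketched as follows. This is a hypothetical illustration, not the paper's architecture: the frequency-count opponent model, the anti-coordination best response, and all names are assumptions made for the example.

```python
from collections import Counter

class PredictiveAgent:
    """Hypothetical sketch of a two-layer predictive architecture:
    the adaptive layer issues a top-down prediction of the opponent's
    next move, and the reactive layer feeds the prediction error back
    (bottom-up) so the opponent model can be updated."""

    def __init__(self, actions):
        self.actions = actions
        # Illustrative opponent model: action frequencies with a flat prior.
        self.counts = Counter({a: 1 for a in actions})
        self.errors = []  # bottom-up prediction-error signal, per round

    def predict_opponent(self):
        # Top-down prediction: the opponent's most frequent past action.
        return self.counts.most_common(1)[0][0]

    def best_response(self, predicted):
        # In an anti-coordination game (Battle of the Exes style),
        # avoid choosing the same option as the opponent.
        return next(a for a in self.actions if a != predicted)

    def observe(self, actual):
        # Bottom-up feedback: record the prediction error, then update
        # the model with the opponent's observed action.
        self.errors.append(int(actual != self.predict_opponent()))
        self.counts[actual] += 1
```

Against a stationary opponent, the error signal decays as the model converges; the abstract's point is that even richer predictive models than this still fail to match the stability of human convention formation.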

Keywords: Cognitive Architectures, Game Theory, Multi-Agent Models, Reinforcement Learning, Theory of Mind

Arsiwalla, X. D., Signorelli, C. M., Puigbo, J. Y., Freire, I. T., Verschure, P., (2018). What is the physics of intelligence? Frontiers in Artificial Intelligence and Applications (ed. Falomir, Z., Gibert, K., Plaza, E.), IOS Press (Amsterdam, The Netherlands) Volume 308: Artificial Intelligence Research and Development, 283-286

In the absence of a first-principles definition, the concept of intelligence is often specified in terms of its phenomenological functions, as a capacity or ability to solve problems autonomously. Whenever an agent, biological or artificial, possesses this ability, it is considered intelligent; otherwise, it is not. While this description serves as a useful correlate of intelligence, it is far from a principled explanation that provides a general yet precise definition along with predictions of mechanisms leading to intelligent behavior. We do not want an explanation to depend on any functionality that itself might be a consequence of intelligence. A possible conceptualization of such a function-free approach might be to formulate the concept in terms of dynamical information complexity. This constitutes a first step towards a statistical mechanics theory of intelligence. In this paper, we outline the steps towards a physics-based definition of intelligence.

Keywords: Complexity, Information Theory, Physics of Intelligence