Publications

by Keyword: Computer Science



Blancas-Muñoz, M., Vouloutsi, V., Zucca, R., Mura, A., Verschure, P., (2018). Hints vs distractions in intelligent tutoring systems: Looking for the proper type of help. ARXIV Computer Science (Human-Computer Interaction), 1-4

The kind of help a student receives during a task has been shown to play a significant role in their learning process. We designed an interaction scenario with a robotic tutor in real-life settings, based on an inquiry-based learning task. We aim to explore how learners' performance is affected by the various strategies of a robotic tutor. We explored two kinds of (presumable) help: hints (either specific to the level or general to the task) and distractions (information not relevant to the task: either a joke or a curious fact). Our results suggest that providing hints to the learner and distracting them with curious facts are more effective than distracting them with humour.

Keywords: Computer Science, Human-Computer Interaction


Freire, I. T., Moulin-Frier, C., Sanchez-Fibla, M., Arsiwalla, X. D., Verschure, P., (2018). Modeling the formation of social conventions in multi-agent populations. ARXIV Computer Science (Multiagent Systems), 1-30

In order to understand the formation of social conventions we need to know the specific role of control and learning in multi-agent systems. To advance in this direction, we propose, within the framework of the Distributed Adaptive Control (DAC) theory, a novel Control-based Reinforcement Learning architecture (CRL) that can account for the acquisition of social conventions in multi-agent populations that are solving a benchmark social decision-making problem. Our new CRL architecture, as a concrete realization of DAC multi-agent theory, implements a low-level sensorimotor control loop handling the agent's reactive behaviors (pre-wired reflexes), along with a layer based on model-free reinforcement learning that maximizes long-term reward. We apply CRL in a multi-agent game-theoretic task in which coordination must be achieved in order to find an optimal solution. We show that our CRL architecture is able to both find optimal solutions in discrete and continuous time and reproduce human experimental data on standard game-theoretic metrics such as efficiency in acquiring rewards, fairness in reward distribution and stability of convention formation.

Keywords: Computer Science, Multiagent Systems
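The layered architecture described in this abstract (a pre-wired reactive loop beneath a model-free reinforcement-learning layer, evaluated on a coordination game) can be illustrated with a toy sketch. This is a hypothetical simplification for illustration only, not the paper's CRL implementation: it uses a stateless Q-learning update, a single hard-coded reflex action, and a two-action pure coordination game in place of the paper's benchmark task.

```python
import random

class ToyLayeredAgent:
    """Illustrative two-layer agent (NOT the paper's CRL code):
    a pre-wired reactive reflex plus a model-free Q-learning layer."""

    def __init__(self, n_actions=2, alpha=0.1, gamma=0.9,
                 epsilon=0.1, reflex_prob=0.05):
        self.q = [0.0] * n_actions      # action values (single-state simplification)
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.reflex_prob = reflex_prob  # chance the reactive layer overrides learning

    def act(self):
        # Reactive layer: occasionally fire a pre-wired reflex (action 0).
        if random.random() < self.reflex_prob:
            return 0
        # Adaptive layer: epsilon-greedy choice over learned Q-values.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[a])

    def learn(self, action, reward):
        # Stateless Q-learning update maximizing long-term reward.
        target = reward + self.gamma * max(self.q)
        self.q[action] += self.alpha * (target - self.q[action])

def coordination_game(a1, a2):
    # Pure coordination payoff: both agents are rewarded only when actions match.
    return (1.0, 1.0) if a1 == a2 else (0.0, 0.0)

def run(episodes=2000, seed=0):
    """Run two agents; return the coordination rate over the final 200 rounds."""
    random.seed(seed)
    agents = [ToyLayeredAgent(), ToyLayeredAgent()]
    matches = 0
    for t in range(episodes):
        actions = [agent.act() for agent in agents]
        rewards = coordination_game(*actions)
        for agent, a, r in zip(agents, actions, rewards):
            agent.learn(a, r)
        if t >= episodes - 200 and actions[0] == actions[1]:
            matches += 1
    return matches / 200
```

Under these assumptions the two agents settle on a shared action convention, so the late-run coordination rate stays high apart from residual exploration noise; the paper's actual architecture adds a sensorimotor control loop and is evaluated in discrete and continuous time against human data.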