Staff member


Klaudia Grechuta

Postdoctoral Researcher
Synthetic, Perceptive, Emotive and Cognitive Systems (SPECS)
kgrechuta@ibecbarcelona.eu

Staff member publications

Grechuta, K., Rubio Ballester, B., Espín Munne, R., Usabiaga Bernal, T., Molina Hervás, B., Mohr, B., Pulvermüller, F., San Segundo, R., Verschure, P. (2019). Augmented dyadic therapy boosts recovery of language function in patients with nonfluent aphasia. Stroke 50 (5), 1270-1274.

Background and Purpose: Evidence suggests that therapy can be effective for recovery from aphasia, provided that it consists of socially embedded, intensive training of behaviorally relevant tasks. However, the resources of healthcare systems are often too limited to provide such treatment at sufficient dosage. Hence, there is a need for evidence-based, cost-effective rehabilitation methods. Here, we asked whether virtual reality-based treatment grounded in the principles of use-dependent learning, behavioral relevance, and intensity positively impacts recovery from nonfluent aphasia. Methods: Seventeen patients with chronic nonfluent aphasia underwent intensive therapy in a randomized, controlled, parallel-group trial. Participants were assigned to the control group (N=8), receiving standard treatment, or to the experimental group (N=9), receiving augmented embodied therapy with the Rehabilitation Gaming System for aphasia (RGSa). All RGSa sessions were supervised by an assistant who monitored the patients but did not offer any elements of standard therapy. Both interventions were matched for intensity and materials. Results: At the end of the treatment, both groups improved significantly on the primary outcome measure (Boston Diagnostic Aphasia Examination: control group, P=0.04; experimental group, P=0.01) and the secondary outcome measure (lexical access-vocabulary test: control group, P=0.01; experimental group, P=0.007). However, only the RGSa group improved on the Communicative Aphasia Log (P=0.01). The follow-up assessment (week 16) demonstrated that while both groups retained vocabulary-related changes (control group, P=0.01; experimental group, P=0.007), only the RGSa group showed therapy-induced improvements in language (P=0.01) and communication (P=0.05). Conclusions: Our results demonstrate the effectiveness of RGSa for improving language and communication in patients with chronic aphasia, suggesting that current challenges faced by the healthcare system in the treatment of stroke might be effectively addressed by augmenting traditional therapy with computer-based methods. Clinical Trial Registration: URL: https://www.clinicaltrials.gov. Unique identifier: NCT02928822.

Keywords: Aphasia, Embodied training, Neurological rehabilitation, Virtual reality


Herreros, Ivan, Miquel, Laia, Blithikioti, Chrysanthi, Nuño, Laura, Rubio Ballester, Belen, Grechuta, Klaudia, Gual, Antoni, Balcells-Oliveró, Mercè, Verschure, P. (2019). Motor adaptation impairment in chronic cannabis users assessed by a visuomotor rotation task. Journal of Clinical Medicine 8 (7), 1049.

Background: The cerebellum has recently been suggested as an important player in the addiction brain circuit. Cannabis is one of the most widely used drugs worldwide, and its long-term effects on the central nervous system are not fully understood. No valid clinical evaluation of the impact of cannabis on the brain is available today. The cerebellum is expected to be one of the brain structures most affected by prolonged exposure to cannabis, due to its high density of endocannabinoid receptors. We aim to use a motor adaptation paradigm to indirectly assess cerebellar function in chronic cannabis users (CCUs). Methods: We used a visuomotor rotation (VMR) task that probes a putatively cerebellar implicit motor adaptation process together with the learning and execution of an explicit aiming rule. We conducted a case-control study, recruiting 18 CCUs and 18 age-matched healthy controls. Our main measure was the angular aiming error. Results: Our results show that CCUs have impaired implicit motor adaptation, as they showed a smaller rate of adaptation than healthy controls (drift rate: 19.3 ± 6.8° vs. 27.4 ± 11.6°; t(26) = −2.1, p = 0.048, Cohen's d = −0.8, 95% CI = (−1.7, −0.15)). Conclusions: We suggest that a visuomotor rotation task might be the first step towards developing a useful tool for the detection of alterations in implicit learning among cannabis users.

Keywords: Cerebellum, Cannabis, Implicit motor learning, Motor adaptation, Visuomotor rotation
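
To make the reported effect size concrete, here is a minimal Python sketch of how a standardized mean difference (Cohen's d with a pooled standard deviation) can be computed from the summary statistics in the abstract above. The code is illustrative only and is not part of the published analysis; the paper's exact test statistic also depends on the sample actually analyzed.

import math

# Drift rates (degrees) reported in the abstract: mean, SD, group size
m_ccu, s_ccu, n_ccu = 19.3, 6.8, 18    # chronic cannabis users
m_hc,  s_hc,  n_hc  = 27.4, 11.6, 18   # healthy controls

# Pooled standard deviation across the two independent groups
s_pooled = math.sqrt(((n_ccu - 1) * s_ccu**2 + (n_hc - 1) * s_hc**2)
                     / (n_ccu + n_hc - 2))

# Cohen's d: standardized difference between the group means
d = (m_ccu - m_hc) / s_pooled
print(f"Cohen's d = {d:.2f}")  # ≈ -0.85, matching the reported -0.8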


Grechuta, Klaudia, Ulysse, Laura, Rubio Ballester, Belén, Verschure, Paul (2019). Self beyond the body: Action-driven and task-relevant purely distal cues modulate performance and body ownership. Frontiers in Human Neuroscience 13, Article 91.

Our understanding of body ownership largely relies on the so-called Rubber Hand Illusion (RHI). In this paradigm, synchronous stroking of the real and the rubber hands leads to an illusion of ownership of the rubber hand, provided that it is physically, anatomically, and spatially plausible. Self-attribution of an artificial hand also occurs during visuomotor synchrony. In particular, participants experience ownership over a virtual or a rubber hand when the visual feedback of self-initiated movements follows the trajectory of the instantiated motor commands, as in the Virtual Hand Illusion (VHI) or the moving Rubber Hand Illusion (mRHI). Evidence shows that both when the cues are triggered externally (RHI) and when they result from voluntary actions (VHI and mRHI), the experience of ownership is established through bottom-up integration and top-down prediction of proximodistal cues (visuotactile or visuomotor) within the peripersonal space. It seems, however, that depending on whether the sensory signals are externally generated (RHI) or self-generated (VHI and mRHI), the top-down expectation signals are qualitatively different. On the one hand, in the RHI the sensory correlations are modulated by top-down influences which constitute empirically induced priors related to the internal (generative) model of the body. On the other hand, in the VHI and mRHI body ownership is actively shaped by processes which allow for continuous comparison between the expected and the actual sensory consequences of the actions. Ample research demonstrates that the differential processing of the predicted and the reafferent information is handled by the central nervous system via an internal (forward) model, or corollary discharge. Indeed, results from the VHI and mRHI suggest that, in action contexts, the mechanism underlying body ownership could be similar to the forward model. Crucially, forward models integrate across all self-generated sensory signals, including not only proximodistal (i.e., visuotactile or visuomotor) but also purely distal sensory cues (i.e., visuoauditory). Thus, if body ownership results from the consistency of a forward model, it should be affected by the (in)congruency of purely distal cues, provided that they inform about action consequences and are relevant to a goal-oriented task, that is, that they constitute a corrective error signal. Here, we explicitly addressed this question. To test our hypothesis, we devised an embodied virtual reality-based motor task where action outcomes were signaled by distinct auditory cues. By manipulating the cues with respect to their spatial, temporal, and semantic congruency, we show that purely distal (visuoauditory) feedback which violates predictions about action outcomes compromises both performance and body ownership. These results demonstrate, for the first time, that body ownership is influenced not only by externally and self-generated cues which pertain to the body within the peripersonal space but also by those arising outside of the body. Hence, during goal-oriented tasks, body ownership may result from the consistency of forward models.

Keywords: Body ownership, Internal forward model, Goal-oriented behavior, Multisensory integration, Top-down prediction
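
As a conceptual illustration of the forward-model account discussed in this abstract, the minimal Python sketch below models the comparison between predicted and actual (reafferent) sensory feedback on a single scalar channel. All names, values, and the scalar simplification are hypothetical illustrations, not material from the study.

def forward_model(motor_command: float, gain: float = 1.0) -> float:
    """Predict the sensory consequence of a motor command (efference copy)."""
    return gain * motor_command

def prediction_error(motor_command: float, reafference: float) -> float:
    """Mismatch between predicted and actual feedback; on the forward-model
    account, a large mismatch should compromise body ownership."""
    return reafference - forward_model(motor_command)

# Congruent distal cue (e.g., the expected action sound): small error
print(prediction_error(motor_command=1.0, reafference=1.02))   # 0.02
# Incongruent distal cue (wrong timing, place, or meaning): large error
print(prediction_error(motor_command=1.0, reafference=-0.50))  # -1.5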


Vouloutsi, V., Grechuta, K., Verschure, P. (2019). Evaluation of the facial expressions of a humanoid robot. Biomimetic and Biohybrid Systems: 8th International Conference, Living Machines 2019 (Lecture Notes in Computer Science), Springer International Publishing (Nara, Japan), 11556, 365-368.

Facial expressions are salient social features that are crucial in communication, and humans are capable of reading the messages faces convey and the emotions they display. Robots that interact with humans will need to employ similar communication channels for successful interactions. Here, we focus on the readability of the facial expressions of a humanoid robot. We conducted an online survey in which participants evaluated emotional stimuli and assessed the robot's expressions. Results suggest that the robot's facial expressions are correctly recognised and that the appraisal of the robot's expressive elements is consistent with the literature.

Keywords: Emotion recognition, Facial expressions, Human-robot interaction