Invited Speakers

Prof. Teena Chakkalayil Hassan

Teena Hassan has been a Professor of Computer Science, with a specialisation in the mathematical foundations of autonomous systems, at Bonn-Rhein-Sieg University of Applied Sciences since March 2023. She heads the Institute for Artificial Intelligence and Autonomous Systems in the Department of Computer Science and focuses on promoting research, teaching, and industry transfer in the domains of artificial intelligence, robotics, and autonomous systems. She completed her doctorate summa cum laude at the University of Bamberg, Germany, in 2020, on the topic of automatic facial expression analysis that integrates domain knowledge with data-driven methods. From 2015 to 2018, she was a research associate at the Fraunhofer Institute for Integrated Circuits in Erlangen, Germany, where she conducted her doctoral research with a focus on applications in pain and stress recognition. She later extended her research to the field of human-robot interaction at Bielefeld University and the University of Bremen, developing empathetic robots that can interact with humans through naturalistic and intuitive modalities. One of her motivations has always been to use technology to improve people’s quality of life. She places great emphasis on the human-centred aspects of interaction with robots and prioritises the responsible development of adaptive, socially assistive autonomous systems.


ZAT: Enabling Innovations in Assistive Technologies for the Future

The challenges we face worldwide today, arising not only from an ageing society but also from rising diagnoses of neurological conditions across all age groups, have fuelled researchers to find ways to improve the lives of those affected, both directly and indirectly. The potential benefits of assistive technologies are immeasurable. This talk will provide an introduction to the work being conducted at the Center for Assistive Technologies Rhein-Ruhr (ZAT), its focus on pro-adaptivity, and what this looks like in the work, home, healthcare, mobility, and education domains. We will also discuss a number of challenges that have posed barriers to entry in the field of assistive technology and issue a call for closer and stronger ties among the many initiatives working to address these challenges successfully.


Prof. Angelo Cangelosi

Angelo Cangelosi is Professor of Machine Learning and Robotics at the University of Manchester (UK) and founder and co-director of the Manchester Centre for Robotics and AI. He was awarded a European Research Council (ERC) Advanced Grant (funded by UKRI). His research interests are in cognitive and developmental robotics, neural networks, language grounding, human-robot interaction and trust, and robot companions for health and social care. Overall, he has secured over £38m in research grants as coordinator/PI, including the ERC Advanced grant eTALK, the UKRI TAS Trust Node and CRADLE Prosperity, the US AFRL project THRIVE++, and numerous Horizon and MSCA grants. Cangelosi has produced more than 300 scientific publications. He is Editor-in-Chief of the journals Interaction Studies and IET Cognitive Computation and Systems, and in 2015 was Editor-in-Chief of IEEE Transactions on Autonomous Mental Development. He has chaired numerous international conferences, including ICANN 2022 in Bristol and ICDL 2021 in Beijing. His book “Developmental Robotics: From Babies to Robots” (MIT Press) was published in January 2015 and has been translated into Chinese and Japanese. His latest book, “Cognitive Robotics” (MIT Press), co-edited with Minoru Asada, was published in 2022.


Cognitive Robotics: From Babies to Robots and AI

This talk introduces the concept of Cognitive Robotics, i.e. the field that brings insights and methods from AI and from the cognitive and biological sciences to robotics (cf. Cangelosi & Asada 2022; the book is available open access). This is a highly interdisciplinary approach in which AI computer scientists and roboticists collaborate closely with psychologists and neuroscientists. We will use the case study of language learning to demonstrate this highly interdisciplinary field, presenting developmental psychology studies on children’s language acquisition alongside robot experiments on language learning. Growing theoretical and experimental psychology research on action and language processing, and on number learning and gestures in children and adults, clearly demonstrates the role of embodiment in cognition and language processing. In psychology and neuroscience, this evidence constitutes the basis of embodied cognition, also known as grounded cognition. In robotics and AI, these studies have important implications for the design of linguistic capabilities, in particular language understanding, in robots and machines for human-robot collaboration. This focus on language acquisition and development uses Developmental Robotics methods as part of the wider Cognitive Robotics approach. During the talk we will present examples of developmental robotics models and experimental results with the baby robot iCub and with the Pepper robot. One study focuses on embodiment biases in early word acquisition and grammar learning. The same developmental robotics method is used for experiments on pointing gestures and finger counting that allow robots to learn abstract concepts such as numbers. We will then present a novel developmental robotics model, and human-robot interaction experiments, on Theory of Mind and its relationship to trust. This considers both people’s Theory of Mind of robots’ capabilities and the robot’s own ‘Artificial Theory of Mind’ of people’s intentions. Results show that trust and collaboration are enhanced when we can understand the intentions of other agents and when robots can explain their decision-making strategies to people. The implications of such cognitive robotics approaches for embodied cognition in AI and the cognitive sciences, and for robot companion applications, will also be discussed. The talk will also consider philosophy of science issues on embodiment and on machines’ understanding of language, the ethical issues of trustworthy AI and robots, and the limits of current big-data large language models.


Prof. Paulo Menezes

Paulo Menezes is affiliated with the Department of Electrical and Computer Engineering at the University of Coimbra, where he currently serves on the scientific committee and the board. As a lecturer, he is responsible for courses on Computer Architecture, Computer Graphics and Augmented Reality, and Interactive Systems and Robotics. He is also a senior researcher at the Institute of Systems and Robotics, where he leads the Mobile Robotics Laboratory and heads the Immersive Systems and Sensorial Stimulation Laboratory. Within these laboratories, he engages in diverse activities encompassing Affective Computing, Human-Robot Interaction, Virtual and Augmented Reality, and Human Behaviour Analysis. He has been actively involved in European projects in the fields of social robots and assistive systems. He is a member of the IEEE and its affiliated societies IEEE-RAS, IEEE-SMC, and IEEE-CIS, currently serves as Chair of the Portuguese Chapter of the IEEE Robotics and Automation Society, and is a member of the Digital Reality Technical Community Steering Committee.


Building Trust: Human-Aware Robots in Shared Spaces

The integration of social robots into everyday environments, such as homes and public spaces, requires careful consideration of how these robots interact with people, particularly those with frailty or reduced mobility. Drawing on research that highlights the importance of eye contact and kinematic responses in human interactions, we ask whether robots can be developed to use similar “body language” to signal awareness and improve perceived safety and trust among users. This talk will delve into our research group’s ongoing efforts to develop robots that not only recognize human presence but also respond in socially appropriate ways, enhancing their trustworthiness and acceptance. These abilities have implications for the design of robots that coexist with humans in close quarters, ensuring they are seen as both useful and non-threatening. By equipping robots with the capacity to demonstrate awareness and interpret social cues, we aim to bridge the gap between functional utility and social compatibility, making robots more adaptable and accepted in environments shared with people.


Dr. Gloria Beraldo

Gloria Beraldo is a researcher at the Institute of Cognitive Sciences and Technologies, Italian National Research Council, and a contract professor at the University of Padova. Her research focuses on exploring and designing novel shared autonomy approaches, with particular attention to brain-computer interface (BCI)-driven robotic devices. Her interests include HRI, shared control and shared autonomy, telepresence robots, cognitive architectures, neurorobotics, socially assistive robotics, and intelligent systems.


The growing elderly population necessitates urgent solutions that bridge the gap between assistive care in domestic and hospital settings and address the difficulties of healthcare management (e.g., costs, staff shortages, limited facilities). Addressing this challenge, socially assistive robots (SARs), and telepresence robots in particular, emerge as promising tools for providing personalized assistance, supporting caregivers and physiotherapists through video calls and in-place mobility. While telepresence effectively enables remote care, it relies on a remote operator. This presentation will offer an overview of our ongoing projects aimed at enhancing commercial social robots with supplementary AI-based services that keep the elderly engaged in the absence of a caregiver, encourage them during rehabilitative and cognitive sessions, acquire vital parameters, and predict critical and/or dangerous events. Contextual inference and the person’s status allow the identification of potentially critical phenomena, which can form the basis of personalized services for the assisted person, creating the most favorable environmental conditions and ensuring monitoring, support, and coaching services for a healthy and active lifestyle. This information is provided as input to an automatic reasoning module that allows for continuous updating with respect to the actual situation, facilitates the identification of the set of high-level goals to be pursued in the environment and toward the caregiver, and enables the generation of a plan of actions to achieve them.