Human-Computer Interaction (HCI) is inherently an interdisciplinary field. With our mixed disciplinary backgrounds, members of this group integrate frameworks of Computer Science with those of Psychology, Education, Information Management, Life Science, etc., to pursue research pertaining to human-centred AI products and services. Sharing the core aim of AIHS, we work on a range of projects where digital technologies are deployed for social good. Examples of our work with emerging technologies include: extended reality (XR) for augmenting learning experiences; emotion recognition and adaptive UIs for enhancing people’s wellbeing; personalised health apps for behavioural change; and trust in conversational AI for customer service.
Professor Alexandra Cristea, Professor of Computer Science
Dr Sunčica Hadžidedić, Assistant Professor
Dr Stamos Katsigiannis, Assistant Professor
Professor Effie Lai-Chong Law, Professor in Human-Computer Interaction
Dr Frederick Li, Associate Professor
Professor Dorothy Monekosso, Professor of Computer Science
Dr Jingyun Wang, Assistant Professor
Empathetic interactions in human-machine communication
Conversational AI systems (e.g. virtual personal assistants and chatbots such as Replika and ChatGPT) are increasingly utilised in more socially complex contexts. Developing systems that can respond appropriately to emotional content and employ empathetic language more convincingly is an active research area. Assessing the way systems create perceptions of empathy brings together a range of technological, psychological, and ethical considerations that merit greater scrutiny. The ability to create the perception of empathy in human-machine communication is achieved in large part through linguistic behaviour, and yet existing approaches to evaluation pay little attention to how language is used. In this project we examine the strategies used by conversational AI systems to create the perception of empathy, using approaches from interactional linguistics, and develop a framework for measuring perceived empathy in the context of human-machine communication.
Conversational Agents for Older Adults (CA4OA)
Effie Lai-Chong Law
Trustworthy conversational agents (CAs) can offer older adults (OAs) a flexible means of accessing basic services such as online banking, where the shift towards CAs is rapid. However, OAs tend not to adopt CAs due to a lack of trust, and so cannot benefit from them. Consequently, OAs may become digitally marginalised and excluded from an increasingly digitalised society. It is therefore timely to study, with interdisciplinary approaches, how design choices of CAs (modality, embodiment, anthropomorphism) and OAs’ mental models, attributes (e.g., gender), and conditions (e.g., loneliness) are related to trust in CAs. The project will deliver design guidelines and prototypes for OA-specific CAs. CA4OA is a research project funded by the UKRI Trustworthy Autonomous Systems (TAS) Hub.