AI & Society
This theme addresses the interplay between AI technologies and society, focusing on human-centred approaches to Responsible AI. Central to this theme is ensuring that AI systems are trustworthy, inclusive, and designed with human needs in mind. For society to flourish, it is imperative that we develop and evaluate interactive systems so that they are usable, safe, secure, and socially just, enhancing the quality of life of their users. Our work spans application areas such as cybersecurity, health, and education, using AI-based tools such as recommender systems and chatbots. It ranges from developing systems that facilitate positive interactions with technology to assessing potential harms and mitigating biases in data-driven systems, promoting AI technologies that enhance societal well-being while ensuring security, fairness, and ethical integrity.
People
Professor Alexandra Cristea
Professor of Computer Science
Dr Sunčica Hadžidedić
Assistant Professor
Dr Stamos Katsigiannis
Assistant Professor
Professor Effie Lai-Chong Law
Professor in Human-Computer Interaction
Dr Frederick Li
Associate Professor
Professor Dorothy Monekosso
Professor of Computer Science
Dr Jingyun Wang
Assistant Professor
Research Highlights
Empathetic interactions in human-machine communication
Shauna Concannon
Conversational AI systems (e.g. virtual personal assistants and chatbots such as Replika and ChatGPT) are increasingly used in socially complex contexts. Developing systems that can respond appropriately to emotional content and employ empathetic language convincingly is an active research area. Assessing how systems create perceptions of empathy brings together a range of technological, psychological, and ethical considerations that merit greater scrutiny. The perception of empathy in human-machine communication is achieved in large part through linguistic behaviour, and yet existing approaches to evaluation pay little attention to how language is used. In this project we examine the strategies used by conversational AI systems to create the perception of empathy, using approaches from interactional linguistics, and develop a framework for measuring perceived empathy in the context of human-machine communication.
Conversational Agents for Older Adults (CA4OA)
Effie Lai-Chong Law
Trustworthy conversational agents (CAs) can offer older adults (OAs) a flexible means of accessing basic services such as online banking, where the shift towards CAs is rapid. However, OAs tend not to adopt CAs owing to a lack of trust, and so cannot benefit from them. Consequently, OAs may become digitally marginalised and excluded from an increasingly digitalised society. It is therefore timely to study, through interdisciplinary approaches, how the design choices of CAs (modality, embodiment, anthropomorphism) and OAs' mental models, attributes (e.g., gender), and conditions (e.g., loneliness) relate to trust in CAs. The project will deliver design guidelines and prototypes for OA-specific CAs. CA4OA is a research project funded by the UKRI Trustworthy Autonomous Systems (TAS) Hub.