AI & Society
This theme addresses the interplay between AI technologies and society, focusing on human-centred approaches to Responsible AI. Central to this theme is the importance of ensuring AI systems are trustworthy, inclusive, and designed with human needs in mind. For society to flourish it is imperative that we develop and evaluate interactive systems to ensure that they are usable, safe, secure and socially just, enhancing the quality of life of their users. Our work spans application areas like cybersecurity, health, and education, using AI-based tools such as recommender systems and chatbots. This theme promotes AI technologies that enhance societal well-being while ensuring security, fairness, and ethical integrity. Our work ranges from developing systems that facilitate positive interactions with technology to assessing potential harms and mitigating biases in data-driven systems.
People
Shauna Concannon – Assistant Professor
Professor Alexandra Cristea – Professor of Computer Science
Dr Sunčica Hadžidedić – Assistant Professor
Dr Stamos Katsigiannis – Assistant Professor
Professor Effie Lai-Chong Law – Professor in Human-Computer Interaction
Dr Frederick Li – Associate Professor
Dr Jingyun Wang – Assistant Professor
Research Highlights
Measuring what matters: Scoping dimensions of measurement for responsible AI in the public sector
Shauna Concannon
AI is becoming increasingly common in the public sector, but assessing the costs and benefits of AI applications frequently relies on technical and efficiency measures that do little to capture public values or the complexity of public service delivery.
In collaboration with Sheffield University, the National Physical Laboratory and iNetwork, this RAI-funded project establishes a new cross-sector collaboration to investigate what matters in the measurement of responsible AI in the public sector. Through an analysis of current public sector AI case studies and co-productive workshops with stakeholders in central and local government, we will develop a foundational understanding of practices and priorities for responsible AI measurement among working public sector professionals.
This research is particularly timely given the UK Government's recent announcement of the new Centre for AI Measurement, to be led by the National Physical Laboratory.
Image: Lone Thomasky & Bits & Bäume https://betterimagesofai.org https://creativecommons.org/licenses/by/4.0/
TechUP Brazil: Inclusive Digital Skills for Employability and Transnational Education
Alexandra I. Cristea
This 17-month project is the only one selected to collaborate with Brazil in this Call (max. 1 project per country). It leverages the UK TechUP model and experiences from the MESSENGER project (MESSENGER – AIHS) to co-develop a transnational education (TNE) initiative focused on employability and inclusion. The grant is with Universidade Federal do Amazonas – Distance Education Center (CED/UFAM, Brazil), Universidade do Estado de Santa Catarina (UDESC, Brazil), and Universidade Federal Rural de Pernambuco (UFRP, Brazil). Objectives are:
1. Diversify UK TNE provision by addressing barriers to academic and research qualifications.
2. Enhance employability skills through digital and socio-emotional training.
3. Expand access to higher education for disadvantaged communities in Brazil, with emphasis on remote areas of the Amazon and southern regions.
Funder: British Council, PI: AI Cristea, CoI: S. Black, PM: J. Waite.
From the Classroom to the Cosmos: Expanding Educational Horizons with Cross-Reality Technology
Alexandra I. Cristea
Prior research has shown that, whilst online education is well established, virtual reality (VR) education holds considerable potential and STEAM education is considered a progressive approach. Building on this, and on the SCENE Lab research on AI in VR at Durham (https://scene.webspace.durham.ac.uk/), our main research question is: “What is the real-life potential of providing authentic cultural and learning experiences through STEAM education using virtual- (VR), augmented- (AR), extended- (XR), mixed- (MR) and cross-reality (CR) solutions (jointly called ‘∈CR’, i.e. subsets of CR) technology, what kind of skills can be nurtured, and what techniques, technologies and algorithms are needed?” Following from this, our primary objective is to propose, build and evaluate in real life an art-based learning environment for virtual and extended (∈CR) reality for secondary and university education. The secondary objective is cross-cultural collaboration and personalised feedback to enhance interactions between Japanese and UK pupils and teachers.
Research methods include a participative design methodology, involving users (teachers and learners) in all aspects of our framework construction and system design. An example of a novel technology in use is digital twins – virtual representations that serve as real-time digital counterparts of physical objects or processes – to simulate VR objects or students across the Japanese-UK virtual space. We are expanding or creating the software and algorithms necessary to support the educational ∈CR experience, including 3D visualisation, adaptive interaction with digital and recognised real objects in the environment, automatic translation, and artificial intelligence (AI)-based recommendations.
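The digital twin idea described above can be illustrated with a minimal sketch: a virtual object that mirrors state updates streamed from its physical counterpart. The class name, fields and update format here are hypothetical examples for illustration, not part of the project's actual software.

```python
# Minimal illustrative sketch of a digital twin: a virtual object that
# mirrors state updates from its physical counterpart (e.g. a classroom
# artefact shared across the Japanese-UK virtual space). All names and
# fields are hypothetical.
from dataclasses import dataclass, field


@dataclass
class DigitalTwin:
    object_id: str
    state: dict = field(default_factory=dict)

    def sync(self, sensor_update: dict) -> None:
        """Apply the latest readings from the physical object."""
        self.state.update(sensor_update)


twin = DigitalTwin("sculpture-01")
twin.sync({"position": (1.0, 0.5, 2.0), "rotation_deg": 90})
twin.sync({"rotation_deg": 120})  # only the changed reading arrives
print(twin.state)  # → {'position': (1.0, 0.5, 2.0), 'rotation_deg': 120}
```

In practice such a twin would also push its state out to each connected VR client, so that learners in both countries see the same object in (near) real time.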
Funder: Royal Society; PI Japan: M. Kayama, Shinshu University; CoI: J. Wang, S. Concannon, C. Stewart; CoI Japan: S. Ogio, Univ. of Tokyo; T. Tomida, Shinshu University; T. Nagai, T. Tachi, Institute of Technologists
Empathetic interactions in human-machine communication
Shauna Concannon
Conversational AI systems (e.g. Virtual Personal Assistants, and chatbots such as Replika and ChatGPT) are increasingly utilised in socially complex contexts. Developing systems that can respond appropriately to emotional content and employ empathetic language more convincingly is an active research area. Assessing the way systems create perceptions of empathy brings together a range of technological, psychological, and ethical considerations that merit greater scrutiny. The ability to create the perception of empathy in human-machine communication is achieved in large part through linguistic behaviour, and yet existing approaches to evaluation pay little attention to how language is used. In this project we examine the strategies used by conversational AI systems to create the perception of empathy, using approaches from interactional linguistics, and develop a framework for measuring perceived empathy in the context of human-machine communication.
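One way linguistic analysis of this kind can start is by identifying empathetic language cues in system replies. The toy scorer below is a hypothetical sketch, not the project's actual framework: the marker categories and phrase lists are invented for illustration.

```python
# Toy lexicon-based scorer for empathetic language cues in chatbot
# replies. The categories and marker phrases are hypothetical examples;
# a real interactional-linguistic analysis would go far beyond phrase
# matching.
EMPATHY_MARKERS = {
    "acknowledgement": ["i understand", "that sounds", "i'm sorry to hear"],
    "validation": ["it makes sense that", "anyone would feel", "it's okay to"],
    "support": ["i'm here", "let me help", "you can"],
}


def empathy_cues(reply: str) -> dict:
    """Count occurrences of each category of empathy marker in a reply."""
    text = reply.lower()
    return {
        category: sum(text.count(marker) for marker in markers)
        for category, markers in EMPATHY_MARKERS.items()
    }


reply = "I'm sorry to hear that. It makes sense that you feel anxious; let me help."
print(empathy_cues(reply))
# → {'acknowledgement': 1, 'validation': 1, 'support': 1}
```

Counts like these could then be compared against human ratings of perceived empathy, which is the kind of link between linguistic behaviour and perception that the project investigates.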
Photo by Andy Kelly on Unsplash.
Conversational Agents for Older Adults (CA4OA)
Effie Lai-Chong Law
Trustworthy conversational agents (CAs) can help older adults (OAs) as a flexible means to access basic services such as online banking, where the shift to CAs is fast. However, OAs tend not to adopt CAs due to a lack of trust, and so cannot benefit from them. Consequently, OAs may become digitally marginalised and excluded from a digitalised society. It is timely to study, with interdisciplinary approaches, how design choices of CAs (modality, embodiment, anthropomorphism) and OAs’ mental models, attributes (e.g. gender) and conditions (e.g. loneliness) are related to trust in CAs. Design guidelines and prototypes for OA-specific CAs will be delivered. CA4OA is a research project funded by the UKRI Trustworthy Autonomous Systems (TAS) Hub.