
Seminars

The AIHS seminar series usually takes place on Thursdays at 13:00. Join on Zoom.

To sign up for the AIHS seminar mailing list please email [email protected]

For talks or any other questions, contact: [email protected]

Weekly Seminars

Date | Speaker | Topic
29/07/2022 | Oleg Sychev, Assoc. Prof., Volgograd State Technical University | Generating Automatic Feedback for Online Learning: From Completion Hints to Explaining Domain Law Violations and Asking Questions
Abstract: Performing assessments and receiving feedback about their correctness and the errors made is an important part of the learning process. Practice with feedback converts knowledge into skills. Students who have trouble grasping material from texts can discover it through practice in a controlled environment if their errors are shown and explained to them. To achieve this effect, feedback must be informative: if a student gets stuck, they must be able to find their error and continue solving the exercise using the provided feedback. Automatic feedback on simple, routine tasks lets teachers concentrate on developing higher-level skills; it is critical for MOOCs, where teachers’ time is not enough to give feedback to every student. So my talk will be about different kinds of feedback that can be calculated from the student’s answer. We started from simple string completion hints for short-text answer questions where answers were templated by regular expressions. The feedback is the correct beginning of the answer (up to the first wrong character), the next correct character, or the next correct word (token), leading to the nearest completion. This alone increased students’ engagement with e-learning tools, especially for students who gained little from lectures and reading, sometimes leading to a “Eureka” effect when the student discovered the rules they couldn’t get from explanations. Our next model, the basis of the CorrectWriting software, was used to teach syntax: writing correct tokens in the correct order. The feedback consists of error messages about misplaced, omitted, and extraneous tokens and hints on how to fix them; the model relies on the teacher to provide descriptions of tokens’ syntactic roles. CorrectWriting was used to teach programming languages and natural English. Our new generation of intelligent formal models aims to automate developing comprehension of concepts when introducing students to new subject domains. They use simple exercises exposing the properties of the studied concepts and, using closed-answer questions, are able to link every wrong step from a student to the breaking of some domain law, providing explainable feedback about errors as soon as they are made. Having detailed information about the laws the student got right and the laws the student broke repeatedly provides good grounds to study adaptation techniques, especially because a problem’s properties (the laws and concepts it requires to solve) can be formally deduced instead of relying on the teacher’s expertise. These models can also solve tasks for the students, so only a problem description (in both human-readable and formal forms) is needed. This makes creating worked examples (or demonstrating particularly hard steps) easy and opens the way to mining a large question base from existing open-source code. Mining and generating questions will allow solving another important problem: taking the burden of exercise creation from teachers and providing a practically unlimited supply of problems with any necessary properties. Another important feature of this approach is follow-up questions, which try to make the student think about their errors and determine the exact fault reason. We developed a set of follow-up questions for the experimental domain and are starting to develop methods for their automated generation. While these exercises are relatively small and easy for now, they take question generation and solving to a new level and can later be adapted to more complex tasks.
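
To make the completion-hint feedback concrete, here is a minimal Python sketch that derives the three hint types from a single canonical answer string; that single string is a simplifying assumption, since the system described above matches answers against regular-expression templates and steers towards the nearest completion.

```python
def completion_hints(correct: str, attempt: str) -> dict:
    """Prefix-based hints for a short-text answer (toy version).

    `correct` is assumed to be a single canonical answer; the real system
    matches regular-expression templates and picks the nearest completion.
    """
    match_len = 0
    for a, c in zip(attempt, correct):
        if a != c:
            break
        match_len += 1
    remainder = correct[match_len:]
    return {
        "correct_prefix": correct[:match_len],   # answer up to the first wrong character
        "next_char": remainder[:1],              # the next correct character
        "next_word": remainder.split()[0] if remainder.split() else "",
    }

print(completion_hints("select name from users", "select nmae from users"))
```
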
23/06/2022 | Prof Yulan He, University of Warwick | TBC
Abstract: TBC
16/06/2022 | Dr. Simone Stumpf, University of Glasgow | Making AI more understandable, interactive and human-centric
09/06/2022 | Dr. Leandro Minku, University of Birmingham | Overcoming the Challenge of Limited Labeled Data in Data Stream Learning
Abstract: The volume and incoming speed of data have increased tremendously over the past years. Data frequently arrive continuously over time in the form of streams, rather than forming a single static data set. Therefore, data stream learning, which is able to learn incoming data upon arrival, is an increasingly important approach to extract knowledge from data. Data stream learning is a challenging task, because the underlying probability distribution of the problem is typically not static, but undergoes changes over time. This challenge is exacerbated by the fact that, even though the rate of incoming examples may be very large, only a small portion of these examples may arrive as labeled examples for training due to the high cost of the labelling process. In this talk, I will discuss novel data stream learning approaches and research directions to tackle this and other challenges posed by real-world applications.

Bio: Dr. Leandro L. Minku is an Associate Professor at the School of Computer Science, University of Birmingham (UK). Prior to that, he was a Lecturer in Computer Science at the University of Leicester (UK). He received the PhD degree in Computer Science from the University of Birmingham (UK) in 2010. Dr. Minku’s main research interests are machine learning in non-stationary environments / data stream mining, online class imbalance learning, ensembles of learning machines and computational intelligence for software engineering. Among other roles, Dr. Minku is Associate Editor-in-Chief for Neurocomputing, Senior Editor for IEEE Transactions on Neural Networks and Learning Systems, and Associate Editor for Empirical Software Engineering Journal and Journal of Systems and Software. He was also the General Chair for the International Conference on Predictive Models and Data Analytics in Software Engineering (PROMISE 2019 and 2020), and Co-chair for the Artifacts Evaluation Track of the International Conference on Software Engineering (ICSE 2020).
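
As a rough illustration of the setting (not of Dr. Minku’s own methods), the sketch below runs a test-then-train loop over a synthetic stream with an abrupt concept drift, where only a small fraction of examples ever receive labels; it assumes scikit-learn is available.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier()
clf.partial_fit(rng.normal(size=(2, 4)), [0, 1], classes=[0, 1])  # initialise classes

correct = 0
for t in range(10_000):
    x = rng.normal(size=(1, 4))
    drifted = t >= 5_000                          # abrupt concept drift halfway through
    y = int(x[0, 0] + (-1 if drifted else 1) * x[0, 1] > 0)
    correct += int(clf.predict(x)[0] == y)        # test-then-train (prequential) protocol
    if rng.random() < 0.05:                       # only ~5% of examples ever get a label
        clf.partial_fit(x, [y])

print(f"prequential accuracy: {correct / 10_000:.2f}")
```
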
26/05/2022 | Eamonn Bell | Looking back to move forward: the pre-histories of Music Information Retrieval
Abstract: Although the interdisciplinary field of Music Information Retrieval (MIR) is relatively young, with the first international flagship conference (ISMIR) being held in 2000, the term itself in fact dates back to the 1960s (Kassler 1966). Yet, insofar as researchers retell the history of their discipline in the margins of their work, such distant starting points are frequently forgotten or overlooked. This talk draws on my doctoral research into the history of computational approaches to music to better understand the connections between MIR as it is practised today and the intellectual commitments of its early pioneers, including those who sought to design pre-digital information retrieval systems to organise the world’s recorded music. This talk comes at a moment in MIR, as in so many areas of applied computer science, when practitioners are exploring questions of fairness, ethics, and bias in the computational systems they describe in their research. Their work is likely to benefit from deeper connections with the history of computing, and the history of musicology. Therefore, I will briefly describe some general proposals for how relevant historical context can be supplied in the dissemination of new computational or “AI” models of cultural data, along the lines of the recently proposed Datasheets for Datasets (Gebru et al. 2020 [2018]) and Model Cards for Model Reporting (Mitchell et al. 2019).
12/05/2022 | Neil Walkinshaw | Embracing Uncertainty when Reasoning about Software Behaviour
Abstract: Software systems are large, complex, and often evolve over decades, with contributions from hundreds or even thousands of people. Ultimately, when testing or analysing the behaviour of such systems, it is necessary to accommodate a high degree of uncertainty and doubt. This talk covers a programme of work spanning the past decade that has grappled with this problem, specifically within a testing context. In describing this work I will specifically cover three ideas and challenges: (1) the interplay between software testing and Machine Learning, (2) the use of uncertainty logics to explicitly capture and reason about uncertainty, and (3) the challenge of establishing whether an input-output relation is actually causal, or merely accidental.
05/05/2022 | Prof Jane Cleland-Huang | Towards Human Machine Teaming in Emergency Response with small Unmanned Aerial Systems
Abstract: The use of autonomous small Unmanned Aerial Systems (sUAS) to support emergency response scenarios, such as fire surveillance and search and rescue, offers the potential for huge societal benefits. However, designing an effective solution in this complex domain represents a wicked design problem, requiring the system to carefully balance sUAS autonomy and human control. While traditional “Human-on-the-Loop” (HoTL) systems support this to some extent, technological advances in autonomic computing are enabling a more advanced form of collaboration, referred to as Human Machine Teaming (HMT). HMT emphasises the autonomy of both the human and the machine through their interactions, partnership, and teamwork. As such, HMT capitalizes upon the respective strengths of both the human and the machine, whilst simultaneously compensating for each of their potential limitations. In this talk, Professor Cleland-Huang will first describe the DroneResponse system, under development at the University of Notre Dame, and will then discuss the open challenges and preliminary solutions being pursued in order to transition DroneResponse from a HoTL to an HMT environment.
17/03/2022 | Professor Roger K. Moore | Embodied versus Disembodied Conversational Agents: Two communities, one agenda? (video)
Abstract: Recent years have seen tremendous growth in the market for speech-based personal assistants (such as Siri and Alexa) and text-based chatbots. Both attempt to exploit human language to provide automated access to information and services via a disembodied agent. Meanwhile, there is growing interest in social robots – embodied agents that are designed to interact with users physically as well as via spoken language. This talk will highlight the similarities and differences between embodied and disembodied conversational agents, and identify the research themes that are common to both.
10/03/2022
03/03/2022 | Vincent Croset | Single-cell transcriptomics uncovers a novel role for glia in thirst-directed behaviours (video)
Abstract: Thirst emerges from a range of cellular changes that ultimately motivate an animal to consume water. Although some thirst-responsive neuronal signals have been reported, the full complement of brain responses is unclear. Here we identify molecular and cellular adaptations in the brain using single-cell RNA sequencing of water-deprived Drosophila. Perhaps surprisingly, water deficiency primarily altered gene expression in glia, rather than neurons. I will describe the various analyses that enabled us to reach this conclusion, and explain some follow-up experiments we performed to demonstrate that one of the glial genes regulated by thirst in astrocytes contributes to regulating water consumption via modulation of glutamatergic neurotransmission.
24/02/2022
17/02/2022 | Dr. Patricia Muller, Biosciences, Durham University | p53 and metals; p53 protein unfolding and selection of p53 mutations in cancer (video)
10/02/2022 | Alan Dix, Professor and Director of the Computational Foundry, Swansea University | Digital Thinking: seeing the world with digital eyes
03/02/2022 | Ann Blandford FHEA, Professor of Human-Computer Interaction at University College London | AI, computation and interaction: Explorations in healthcare
27/01/2022 | Seiji Isotani, Professor in Computer Science, University of São Paulo, Brazil | Towards the Design of a Public Policy to evaluate educational technologies using evidence and AI
20/01/2022 | EPSRC New Horizons pitching to peers
13/01/2022 | Martin J. Cann, Professor in Biosciences, Durham | Computational approaches to understand the impact of carbon dioxide on biological systems
Abstract: Carbon dioxide (CO2) is one of the most important gases on Earth and an absolute requirement for life. The planet faces long-term increases in atmospheric CO2, which are predicted to significantly impact diverse ecological niches and their component organisms. Therefore, understanding CO2 biology is of pressing strategic importance. We have knowledge of direct CO2 molecular targets in organisms from two sources: the first is experimental laboratory data, and the second is targets hypothesised through computational approaches. I will discuss the background of CO2 biology and why we study it, explain how we identify CO2 targets, and discuss our subsequent challenges in interpreting these data.
09/12/2021 | Bobby Lee Townsend Sturm Jr, Associate Professor of Computer Science at KTH | Traditional Machine Music Learning
Abstract: Briefly, I will summarise and demonstrate my work applying machine learning to traditional music, and traditional music to machine learning, in order to create and study traditional machine learning music from the perspectives of traditional music, machine learning, traditional music machine learning, and finally learning traditional machine music. Not so briefly: at the frontiers of artificial creativity, machines are acting more as partners than tools in creative practice. Instead of, or in addition to, expediting and automating mundane tasks, artificial intelligence (AI) is now able to give suggestions from the tiniest details to the big picture, working together with a human artist to form a complete artistic experience. I have been exploring these frontiers for several years in my own music practice using systems built by applying machine learning to traditional dance music. This has uncovered a variety of interesting frictions with regard to assumed divine origins of creativity, the importance of authenticity, and of course human redundancy. My talk will give an overview of my poetic and scientific research, and discuss some of these significant issues, complete with musical examples. My talk is part of the MUSAiC project: “Music at the Frontiers of Artificial Creativity and Criticism” (https://musaiclab.wordpress.com/).
02/12/2021 | Dr. Wayne Holmes, UCL
25/11/2021 | Santawat Thanyadit, postdoctoral research associate in computer science and human-computer interaction at the Department of Computer Science, Durham University | MiniTutor: An Adaptive Avatar MR System to Improve Students’ Attention
18/11/2021 | Mohammad Mousavi | Model learning for evolving systems
Abstract

Model learning is a technique to learn behavioural models based on the theory of automata (finite state machines) by interacting with black-box systems. Variability and evolution are inherent to many modern systems and hence new sorts of model learning techniques are needed to learn about variability-intensive and evolving systems. In this talk, we first present the basic principles of automata learning and then report on two novel techniques: learning variability-annotated models, and efficient learning for evolving systems by identifying the commonalities and differences in the learning process.
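
The core idea of active automata learning can be sketched in a few lines: hypothesis states are equivalence classes of prefixes that behave identically under a set of distinguishing suffixes, probed through membership queries. The sketch below fixes the suffix set by hand for a toy target; a full L*-style learner would grow that set from counterexamples returned by equivalence queries.

```python
from collections import deque

def member(word: str) -> bool:
    """Membership query to the black box: accepts words with an even number of 'a's."""
    return word.count("a") % 2 == 0

ALPHABET = "ab"
SUFFIXES = ["", "a"]   # distinguishing suffixes; L* grows this set from counterexamples

def row(prefix: str):
    """A prefix's observable behaviour: membership of prefix+suffix for each suffix."""
    return tuple(member(prefix + s) for s in SUFFIXES)

# One hypothesis state per distinct row, discovered breadth-first from the empty word.
state_of, access = {}, []
queue = deque([""])
while queue:
    p = queue.popleft()
    if row(p) in state_of:
        continue
    state_of[row(p)] = len(access)
    access.append(p)
    queue.extend(p + c for c in ALPHABET)

transitions = {(state_of[row(p)], c): state_of[row(p + c)]
               for p in access for c in ALPHABET}
accepting = {s for r, s in state_of.items() if r[0]}   # suffix "" decides acceptance

print(f"{len(access)} states, accepting={accepting}, transitions={transitions}")
```
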

11/11/2021 | Tim Menzies | The Bleeding Edge (the next 10 years of SE analytics)
Abstract

Are you up to date? Are you aware of the technologies that will define the next decade of analytics research? What are the strengths and weaknesses of that next-gen tech? How can we adjust that tech to take into account concerns about “FAT” (fairness, accountability, trust)? How do we teach that diverse range of material? How should you be adjusting your graduate curriculum to cover those topics? Where is deep learning in all that? And beyond the next decade, what happens then? Will this be an interesting talk? Can you skim these slides beforehand (http://tiny.cc/aug21)? Can you come along with hard questions? Will I be able to answer those questions? For answers to all these questions, and more, please come to my talk.

28/10/2021 | Robert Lieck | Recursive Bayesian Networks: Generalising and Unifying Probabilistic Context-Free Grammars and Dynamic Bayesian Networks

14/10/2021 | Simon Woodhead | Eedi and the NeurIPS Education Challenge Dataset
Abstract

In 2020 the most popular competition at NeurIPS was the education challenge. We describe the data generating process, the competition, and the dataset, which is freely available for research.

24/06/2021 | Lenka Schnaubert | Cognitive awareness in digital and social learning environments
Abstract

Digital and social learning environments provide various opportunities to learn and interact with the learning content, teachers and peers. However, regulating social and content-related activities within such environments may provide specific challenges for learners. During self-regulated learning, learners need to dynamically adapt their learning and interaction processes to relevant internal and external conditions. These include their own and their peers’ knowledge and cognitions. For example, perceived gaps in knowledge may require allocation of additional study time while specific knowledge distributions within a group may warrant particular collaboration strategies. However, forming and maintaining awareness regarding these (meta- and socio-) cognitive conditions and deducing appropriate learning activities can be hard to accomplish without support. Building on research from metacognitive self-regulation and computer-supported collaborative learning, this talk explores the role of meta- and socio-cognitive awareness in digital and social learning environments and also introduces self- and group awareness tools as a means to guide individual and collaborative learning processes.

10/06/2021 | Jim Ridgway, School of Education, Durham University | Firing up the Epistemological Engine
Abstract

Conceptions of knowledge, ways of knowing, and uses for knowledge are in a state of flux. Examples include the reproducibility crisis in psychology (and the associated p-value hysteria), and Anderson’s “Death of Theory” in Wired. Babbage and Lovelace worked on the Difference Engine and the Analytical Engine; we now need an Epistemological Engine. We (Alexandra Cristea, Craig Stewart and Jim) have some IAS funding to begin work on the Durham Epistemological Engine, whose goal is to create and use tools to engage with, and shape, the evolving knowledge landscape. A starting point is to review methods to automate the processes involved in a literature review – in particular to support meta-analysis by using computer science tools such as natural language processing, deep learning and AI. Other activities could include: developing tools to identify areas where the research methods are poor (e.g. using small samples to detect weak effect sizes; failure to share data or code [c.f. claims that commercial data science is like alchemy, not chemistry]); and creating tool-to-problem maps.

20/05/2021 | Filipe Dwan Pereira, postdoctoral researcher, Computer Science, Durham | Towards AI-human hybrid online judges to support decision making for CS1 stakeholders
Abstract

Introductory programming (also known as CS1 – Computer Science 1) can be challenging for many students, and these courses suffer high failure and dropout rates. A common agreement from computing education research (CEdR) is that programming students need practice and quick feedback on the correctness of their code. Nonetheless, CS1 classes are usually large, with a high heterogeneity of students, which makes individualised support almost impractical. As an alternative to improve and optimise the learning process, researchers point to systems that automatically evaluate students’ code, called online judges. These systems provide the assignments created by the instructors and an integrated development environment where the student can develop and submit solutions to problems and receive immediate feedback, that is, whether the code developed as a solution for a given problem is right or wrong. Additionally, these online judge systems have opened up new research opportunities, since it is possible to embed in them software components capable of monitoring and recording fine-grained actions performed by the students during their attempts to solve the programming assignments. Research in the areas of Intelligent Tutoring Systems, Adaptive Educational Hypermedia and AI in Education has shown that personalisation using data-driven analysis is essential to improve the learning process and can be useful to provide individualised support for stakeholders (students, instructors, etc.). In this work we collected logs of the students’ interaction within an online judge, recording very fine-grained data, such as keystrokes, number of commands typed, number of submissions, etc., making it possible to do research of great precision into the exact triggers for students’ progress. Furthermore, we extract useful information from the problem statements using Natural Language Processing (NLP). Using such data we performed descriptive, predictive and prescriptive analyses to propose and validate methods that combine a large-scale approach with the flexibility given by an in-house online judge system, allowing unprecedented research depth and the ability to provide individualised support for stakeholders. Indeed, our methods have the potential to improve students’ learning by stimulating effective practice while reducing the instructors’ workload, moving towards novel human/AI online judge systems to support CS1 classes. Our partial results include the following: (i) a cutting-edge interpretable machine learning model that predicts learners’ performance and explains, individually and collectively, factors that lead to failure or success; (ii) a model that, for the first time to the best of our knowledge, detects early effective programming behaviours and indicates how those positive behaviours can be used to guide students with ineffective behaviours; (iii) a novel prescriptive model that automatically detects the topic and subject matter of problems, achieving state-of-the-art results, and makes recommendations based on that and on fine-grained effective behaviours.
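
For a flavour of the interpretable-model result (i), here is a toy sketch on synthetic data: a logistic regression over aggregated log features whose signed weights indicate which behaviours push towards success or failure. The feature names are hypothetical, and the actual work uses a more sophisticated interpretable model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-student features aggregated from online-judge interaction logs.
features = ["submissions", "keystrokes_per_min", "problems_solved", "avg_gap_hours"]

rng = np.random.default_rng(1)
X = rng.normal(size=(300, len(features)))               # synthetic stand-in data
y = (X[:, 2] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
weights = model.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(features, weights), key=lambda t: -abs(t[1])):
    print(f"{name:>20}: {w:+.2f}")   # signed weights show which behaviours predict success
```
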

06/05/2021 | Elad Yacobson, Weizmann Institute, Israel | Encouraging Teacher-sourcing of Social Recommendations Through Participatory Gamification Design
Abstract

Teachers and learners who search for learning materials in open educational resources (OER) repositories benefit greatly from feedback and reviews left by peers who have activated these resources in their class. Such feedback can also fuel social-based ranking algorithms and recommendation systems. However, while educational users appreciate the recommendations made by other teachers, they are not highly motivated to provide such feedback themselves. This situation is common in many consumer applications that rely on users’ opinions for personalisation. A possible solution that has been successfully applied in several other domains to incentivise active participation is gamification. This paper describes, for the first time, the application of a comprehensive cutting-edge gamification taxonomy in a user-centred participatory-design process of an OER system for Physics, PeTeL, used throughout Israel. Physics teachers were first involved in designing gamification features based on their preferences, helping shape the gamification mechanisms likely to enhance their motivation to provide reviews. The results directly informed the implementation of two gamification elements in the learning environment, with a second experiment evaluating their actual effect on teachers’ behaviour. After a long-term, real-life pilot of two months, teachers’ response rate was measured and compared to the prior state. The results showed a statistically significant effect, with a 4X increase in the total number of recommendations per month, even when taking into account the ’Covid-pandemic effect’.

25/02/2021 | Dr. Giora Alexandron, Weizmann Institute, Israel | What does your digital footprint say about you? (Machine Learning to understand Human Learning)
Abstract

The high-resolution learner activity data that modern online learning environments collect provide a unique opportunity to study ways to improve the pedagogy of interactive learning environments. Applications include adaptive learning algorithms that can provide timely feedback and recommend activities to individual students based on their preferences and state of knowledge; and learning analytics that provide actionable insights about students’ learning, and can be used to improve instruction, optimize the pedagogic design, and identify students at risk, among other things. My research centers on these directions in MOOCs and K-12 learning environments. In the talk, I will present results from studies that use machine learning and data mining methods to study the behavior of learners, with the goals of optimizing the design of learning environments, adapting to the needs of individual learners, and detecting and preventing cheating in Massive Open Online Courses.

18/02/2021 | Prof Tanja Mitrovic, Professor of Computer Science, University of Canterbury, New Zealand | Investigating the effect of voluntary use of an intelligent tutoring system on students’ learning
Abstract

Numerous controlled studies prove the effectiveness of Intelligent Tutoring Systems (ITSs). But what happens when ITSs are available to students for voluntary practice? EER-Tutor is a mature ITS which was previously found effective in controlled experiments. Students can use EER-Tutor for tutored problem solving, and there is also a special mode allowing students to develop solutions for the course assignment without receiving feedback. We observed two classes of university students using EER-Tutor. In 2018, the system was available for completely voluntary practice. We hypothesized that the students’ pre-existing knowledge and the time spent in EER-Tutor, mediated by the number of attempted problems, contribute to the students’ scores on the assignment. All but one student used EER-Tutor to draw their assignment solutions, and 77% also used it for tutored problem solving. All our hypotheses were confirmed. Given the identified benefits of tutored problem solving, we modified the assignment for the 2019 class so that the first part required students to solve three problems in EER-Tutor (without feedback), while the second part was a bigger, open-ended problem. Our hypothesized model fits the data well and shows the positive relationship between the three set problems, overall system use, and the assignment scores. In 2019, 98% of the class engaged in tutored problem solving. The 2019 class also spent significantly more time in the ITS, solved significantly more problems and achieved higher scores on the assignment.

11/02/2021 | Noor Hasimah Ibrahim Teo | Ontological Approach for Automatic Question Generation
Abstract

A question is a powerful tool that is widely used in everyday activities in different contexts. In education, a question may help in knowledge construction, increase self-understanding, and help instructors validate students’ conceptual understanding. Manual construction of good questions is a complex task that requires knowledge, the right resources, and experience. The complex process of knowledge construction may slow down the activity of generating assessment questions. Therefore, Automatic Question Generation strategies were introduced. This work has exploited the emergence of Semantic Web technologies, specifically ontologies, to gather related knowledge for question construction. Most of the work on ontology-based question generation has applied similar ontological approaches, focusing mainly on generating distractors for MCQ (multiple choice) questions. In contrast, this research explores factual question generation for short/long answer questions, aiming to produce novel strategies with corresponding question templates to generate factual short/long answer questions from an ontology. Three question generation strategies based on an ontological approach are proposed for educational assessment questions.
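
A minimal sketch of the template idea, with an invented three-triple ontology: each predicate is paired with a question template, and the object of the triple becomes the expected answer. The real work operates on richer OWL ontologies and strategies.

```python
# A toy ontology as subject-predicate-object triples.
TRIPLES = [
    ("Photosynthesis", "occursIn", "Chloroplast"),
    ("Mitochondrion", "isA", "Organelle"),
    ("Chloroplast", "produces", "Glucose"),
]

# One question-generation strategy: a template per predicate, answer = object.
TEMPLATES = {
    "occursIn": "Where does {s} occur?",
    "isA": "What kind of entity is {s}?",
    "produces": "What does {s} produce?",
}

for s, p, o in TRIPLES:
    if p in TEMPLATES:
        print(f"Q: {TEMPLATES[p].format(s=s)}  A: {o}")
```
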

26/11/2020 | Konstantinos Nikolopoulos | Forecasting for big data: Does suboptimality matter?
Abstract

Traditionally, forecasters focus on the development of algorithms to identify optimal models and sets of parameters, optimal in the sense of within-sample fitting. However, this quest strongly assumes that optimally set parameters will also give the best extrapolations. The problem becomes even more pertinent when we consider the vast volumes of data to be forecast in the big data era. In this paper, we ask whether this obsession with optimality always bears fruit, or whether we spend too much time and effort in pursuit of it. Could we be better off targeting faster, more robust systems that aim for suboptimal forecasting solutions which, in turn, do not jeopardise the efficiency of the systems under use? This study sheds light on that question by means of an empirical investigation. We show the trade-off between optimal and suboptimal solutions in terms of forecasting performance versus computational cost. We further discuss the implications of suboptimality and attempt to quantify the monetary savings resulting from suboptimal solutions. We finally point out avenues for future research in the field.
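
The trade-off the abstract describes can be reproduced in miniature: the sketch below fits simple exponential smoothing with a fine and a coarse parameter grid and compares search time against out-of-sample error. It is a toy on one synthetic series, not the paper’s experiment.

```python
import time
import numpy as np

def ses_one_step(y, alpha):
    """One-step-ahead simple exponential smoothing forecasts."""
    f = np.empty_like(y)
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=500))      # one synthetic series; imagine millions of them
split = 400

for label, grid in [("fine search  ", np.linspace(0.01, 0.99, 99)),
                    ("coarse search", np.linspace(0.1, 0.9, 5))]:
    t0 = time.perf_counter()
    best = min(grid, key=lambda a: np.mean((y[:split] - ses_one_step(y[:split], a)) ** 2))
    elapsed = (time.perf_counter() - t0) * 1000
    test_mse = np.mean((y[split:] - ses_one_step(y, best)[split:]) ** 2)
    print(f"{label}  alpha={best:.2f}  search_time={elapsed:6.1f} ms  test MSE={test_mse:.3f}")
```
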

19/11/2020 | Daniela Romano, UCL; now Professor at De Montfort University | From Bio to BCI: extending human physical and mental capabilities
Abstract

Imagine a world where we can think about moving a robotic arm and it happens; play a video game by thinking about moving our arm or leg (without actually doing it); a pair of shoes that adapts to your fatigue; a situation in which you feel exhausted and your synthetic companion takes over the task following your commands. How close are we to using our bodily signals as a mainstream interface? Science fiction or Virtual Reality? Or just a new but real possibility to interact with our digital world? Daniela will present how, by studying the human, and with the help of technology and AI, we are able to enhance and extend human capacities. We are a step closer to the bionic human, where biological and machine-made elements can provide additional powers to our capacities. Implications for our cognitive capacities and our behaviour will also be discussed. Daniela Romano, UCL Human Informatics (UCLHI), Department of Information Studies at UCL, conducts research at the junction between human psychology, VR technology and intelligent algorithms, and the fruitful interactions amongst them. Daniela has an MSc/BSc in Computer Science with Maths and a PhD (Leeds, 2002) investigating how to support naturalistic decision making with an intelligent serious game. Lecturer (2004) and Senior Lecturer (2010) at Sheffield Computer Science, and Professor (2015) at Edge Hill Computer Science, she joined UCL as part of the teaching staff in 2017, where she created UCLHI, an interest group with Brain Science and Psychology in which researchers work at the convergence of multiple disciplines concerned with understanding and modelling human activities and with the design of information systems and technologies.

19/11/2020 | Kiran J. Fernandes | Using Gamification to create value in Business Models
Abstract

Gamification is defined as applying the motivational potential of digital game design in non-game environments. In this presentation, we apply this concept specifically to how a firm creates and captures value. We will discuss how firms create value using a novel model called Stress-Disturbance, based on Value Chain Theory. The main objective of the presentation is to give colleagues in Computer Science an idea of the type of research being pursued in the area of Gamification (as part of an EPSRC project called DC Labs). In this presentation, I will introduce a new method of explaining how firms create value and the role gamification can play in helping firms become more innovative.

12/11/2020 | Yulia Timofeeva, Professor of Computer Science, Warwick University | Computational modelling of neurotransmitter release

Abstract

Increases in the concentration of free calcium ions (Ca2+) in presynaptic terminals of neuronal cells trigger vesicular release of neurotransmitters. Changes in Ca2+ concentration are primarily due to Ca2+ influx through voltage-gated calcium channels located at the plasma membrane and activated during an action potential or spontaneously. In this talk I will demonstrate how experimentally constrained computational modelling of the underlying biological processes can complement laboratory studies (using electrophysiology and imaging techniques) and provide insights into the mechanisms of synaptic transmission.
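
As a toy example of this kind of modelling (all parameters are invented for illustration, not the speaker’s experimentally constrained models), the ODE sketch below drives cooperative, Hill-type vesicle release from a brief calcium influx pulse.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, state, k_decay=20.0, k_rel=100.0, kd=0.02, n=4):
    """d[Ca2+]/dt and release: a brief influx pulse, clearance, and
    cooperative (Hill-type) triggering that depletes a vesicle pool."""
    ca, released = state
    influx = 50.0 * np.exp(-(((t - 0.002) / 0.0005) ** 2))   # AP-evoked Ca2+ pulse
    rate = k_rel * ca**n / (kd**n + ca**n)                   # 4th-order cooperativity
    return [influx - k_decay * ca, rate * (1.0 - released)]

sol = solve_ivp(rhs, (0.0, 0.02), [0.0, 0.0], max_step=1e-4)
print(f"peak [Ca2+] = {sol.y[0].max():.3f} (arb. units), "
      f"fraction of pool released = {sol.y[1][-1]:.2f}")
```
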

05/11/2020 | Dr. Lada Timotijevic, Surrey University | Developing responsible governance for an e-infrastructure: the case study of the Determinants and Intake “Richfields” Data Platform

Abstract

Big data provides immense opportunities to radically alter the way in which science is done, fostering cross-fertilisation between disciplines and providing connectivity between disparate data-sets. Distributed computing infrastructures – commonly known as e-infrastructures – have been created that provide researchers shared access to large data collections, enabled through advanced ICT tools for data analysis, large-scale computing resources, and high-performance visualisation. However, the sheer scale of big data poses ethical, legal and societal challenges to e-infrastructures. Using a case study as a practical example, we explore how responsible governance is being forged in the context of the development of a public health e-infrastructure: the Determinants and Intake (DI) “Richfields” Data Platform, an international e-infrastructure developed to connect data, services and tools for the study of human food-related behaviour. The paper will pinpoint the shortcomings of the current legal framework in the EU as applied to the governance of a public health nutrition e-infrastructure.

21/10/2020 | Dr. Susanne Lajoie, Department of Educational & Counselling Psychology, McGill University, Canada | Application of Cognitive Theories to the Design of Advanced Technologies for Learning

Abstract

Psychological theories can inform the design of technology-rich learning environments (TREs) to provide better learning and training opportunities. Research shows that learners do better when interacting with material that is situated in meaningful, authentic contexts. Recently, psychologists have become interested in the role that emotion plays in learning with technology. Lajoie investigates the situations under which technology works best to facilitate learning and performance by examining the relations between cognition (problem solving, decision making), metacognition (self-regulation) and affect (emotion, beliefs, attitudes, interests, etc.). Convergent methodologies will be described (i.e., physiological and behavioral indices, think-aloud protocols, eye tracking, etc.) in terms of how they are used to identify how learners think and feel in the context of TREs. TREs can include simulations, intelligent tutoring systems, agent-based systems, augmented reality systems, and serious games. Examples will be presented of how TREs can determine when learners are engaged and happy as opposed to bored and angry while learning. Findings from this type of research help identify the best way to tailor the learning experience to the cognitive and affective needs of the learner. Furthermore, social and emotional competencies of learning in teams in the context of TREs will be discussed as they pertain to: (a) the ability to adapt to new situations and challenges and engage in complex problem solving; (b) social skills necessary for communicating and collaborating productively and proficiently; (c) social-emotional skills and empathy necessary for tackling challenging problems and regulating emotion; and (d) the ability to take initiative, set goals, and monitor self and others.

08/10/2020 | James Sprittles | Computational Modelling of Free Surface Nanoflows
Abstract

Understanding the behaviour of liquid-gas interfaces at the micro and nano scale is key to a myriad of phenomena, ranging from the formation of clouds through to 3D printing. Accurate experimental observation of such phenomena is complex due to the small spatio-temporal scales of interest and, consequently, mathematical modelling and computational simulation become key tools with which to probe such flows. As the characteristic scales of interest become comparable to microscopic scales, for a gas the mean free path and for a liquid the molecular diameter, the basic Navier-Stokes-Fourier (NSF) paradigm no longer provides an accurate description of the flow physics. However, microscopic models such as the kinetic theory of gases or molecular dynamics (MD) of liquids become computationally intractable for many flows of practical interest. The majority of my talk will consider the influence of thermal fluctuations, which are seen to be key to understanding counter-intuitive phenomena in nanoscale interfacial flows. A ‘top down’ framework that incorporates thermal noise is provided by fluctuating hydrodynamics and in this talk we shall use this model to gain insight into free surface nanoflows such as drop coalescence, jet breakup and thin film rupture, using MD as a benchmark. If time permits, I will overview our work on capturing gas kinetic effects to predict the outcome of collision events in which gas nanofilms govern flow behaviour on much larger (mm) scales. Specifically, I will consider the impact of liquid drops on solid surfaces, where we can compare computational models to recent experimental data.

16/07/2020 | Yonina Eldar | From compressed sensing to deep learning: tasks, structures, and models
Abstract

The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal and image processing. However, in many modern applications, signal bandwidths have increased tremendously, while acquisition capabilities have not scaled sufficiently fast. Consequently, conversion to digital has become a serious bottleneck. Furthermore, the resulting digital data requires storage, communication and processing at very high rates, which is computationally expensive and requires large amounts of power. In the context of medical imaging, sampling at high rates often translates to high radiation dosages, increased scanning times, bulky medical devices, and limited resolution. In this talk, we present a framework for sampling and processing a large class of wideband analog signals at rates far below Nyquist in space, time and frequency, which allows us to dramatically reduce the number of antennas, sampling rates and band occupancy. Our framework relies on exploiting signal structure and the processing task. We consider applications of these concepts to a variety of problems in communications, radar and ultrasound imaging, and show several demos of real-time sub-Nyquist prototypes, including a wireless ultrasound probe, sub-Nyquist MIMO radar, super-resolution in microscopy and ultrasound, cognitive radio, and joint radar and communication systems. We then discuss how the ideas of exploiting the task, structure and model can be used to develop interpretable model-based deep learning methods that can adapt to existing structure and are trained from small amounts of data. These networks achieve a more favorable trade-off between the increase in parameters and data and the improvement in performance, while remaining interpretable.
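
The core mathematical idea behind sampling below Nyquist, recovering a structured (here, sparse) signal from few linear measurements, can be sketched with iterative soft-thresholding (ISTA); this illustrates the principle only, not the speaker’s hardware prototypes.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 256, 64, 5                        # ambient dimension, measurements << n, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random sensing matrix
y = A @ x_true                              # only m linear measurements of the signal

# ISTA: iterative soft-thresholding for min ||Ax - y||^2 + lam * ||x||_1
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(1000):
    grad_step = x - (A.T @ (A @ x - y)) / L
    x = np.sign(grad_step) * np.maximum(np.abs(grad_step) - lam / L, 0.0)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error from {m}/{n} samples: {err:.3f}")
```
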

09/07/2020 | Jennifer Badham | Using agent-based modelling for COVID-19 social interventions
Abstract

Agent-based models represent the system being modelled in a specific way. In this seminar, I will introduce agent-based modelling and present a model of social interventions to manage the COVID-19 epidemic. In doing so, I will demonstrate some ways in which the agent-centric perspective is particularly useful for policy planning.
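
A minimal toy of the kind of agent-based model described (not Badham’s actual model): susceptible/infected/recovered agents mix randomly, and a contact-reduction intervention lowers both the epidemic peak and the total number infected.

```python
import random

random.seed(4)
N, DAYS, P_TRANSMIT, INFECTIOUS_DAYS = 2000, 120, 0.03, 10

def run(contacts_per_day):
    state = ["S"] * N                         # susceptible / infected / recovered
    days_sick = [0] * N
    for i in random.sample(range(N), 5):      # seed infections
        state[i] = "I"
    peak = 0
    for _ in range(DAYS):
        infected = [i for i in range(N) if state[i] == "I"]
        peak = max(peak, len(infected))
        for i in infected:
            for j in random.sample(range(N), contacts_per_day):   # random mixing
                if state[j] == "S" and random.random() < P_TRANSMIT:
                    state[j] = "I"
            days_sick[i] += 1
            if days_sick[i] >= INFECTIOUS_DAYS:
                state[i] = "R"
    return peak, N - state.count("S")

for label, contacts in [("no intervention  ", 12), ("contact reduction", 4)]:
    peak, ever_infected = run(contacts)
    print(f"{label}  peak={peak:4d}  ever infected={ever_infected}")
```
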

02/07/2020 | Tingwei Chen, visiting Professor, China | Time-Aware Attention Based Deep Neural Networks Model for Sequential Recommendation
Abstract

Recommendation systems aim to assist users to discover their most preferred content from an ever-growing corpus of items. Although recommenders have been greatly improved by deep learning, they still face several challenges: (1) behaviours are much more complex than words in NLP, so traditional attentive and recurrent models may fail to capture the temporal dynamics of user preferences; (2) the preferences of users are multiple and dynamic, so it is difficult to integrate long-term preference and short-term intent. In order to solve these problems, we propose two new models, named Dynamic Memory Preference Network (TPCF) and Multi-hop Time-aware Attentive Memory network (MTAM). These two models make good use of the temporal information in the sequence data, thereby improving the recommendation effect. Experimental results demonstrate the effectiveness of our models.
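
The key ingredient, injecting temporal information into attention, can be sketched generically (this is not the MTAM/TPCF architecture): attention scores are penalised by the age of each past interaction before the softmax.

```python
import numpy as np

def time_aware_attention(q, K, V, dt, decay=0.1):
    """Dot-product attention whose scores are penalised by interaction age.

    q: (d,) query; K, V: (T, d) keys/values for T past behaviours;
    dt: (T,) time elapsed since each behaviour (e.g. in days).
    """
    scores = K @ q / np.sqrt(q.size) - decay * dt   # older interactions score lower
    w = np.exp(scores - scores.max())
    w /= w.sum()                                    # softmax over the behaviour sequence
    return w @ V, w

rng = np.random.default_rng(5)
K, V, q = rng.normal(size=(6, 8)), rng.normal(size=(6, 8)), rng.normal(size=8)
dt = np.array([30.0, 14.0, 7.0, 3.0, 1.0, 0.1])
context, weights = time_aware_attention(q, K, V, dt)
print("attention weights:", np.round(weights, 3))
```
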

25/06/2020 | Julita Vassileva, Professor in Computer Science, University of Saskatchewan, Canada | Persuasive Technologies for Behaviour Change: Personalization and Ethical Issues
Abstract

Persuasive Technologies (PT) apply strategies for influencing people’s choices and behaviours based on research in social psychology and behavioural economics; they include game mechanics and elements, nudges, and argumentation to steer behaviours in a desired way. This talk will present work in the MADMUC Lab at the University of Saskatchewan on personalizing PT to amplify their effect, specifically with respect to personality and culture dimensions, and will discuss some of the ethical issues for designers of personalized PT.

23/04/2020 | Jan Deckers, Newcastle | Should we develop AI systems to enhance human morality?
Abstract

As the use of genetic biotechnology to make human beings more moral is dangerous, the question must be asked whether AI could be used to achieve the same goal. Dietrich has proposed a replacement project of AI enhancement, suggesting that robots should eventually replace human beings as they would not suffer from our moral defects [Dietrich, E. (2001). Homo sapiens 2.0: Why we should build the better robots of our nature. Journal of Experimental & Theoretical Artificial Intelligence, 13(4), 323-8]. This is problematic, and should be rejected. The same applies to the exhaustive project of AI enhancement, as proposed for example by Gips [1995, Towards the Ethical Robot. In K.M. Ford, C. Glymour, & P. Hayes (Eds.), Android Epistemology (pp. 243-252). Cambridge: MIT Press] and by Lovelock [2019, Novacene: The Coming Age Of Hyperintelligence, Allen Lane]. Whereas the exhaustive project does not seek human replacement, it proposes that the AI system should nevertheless exert great control over our behaviour. The exhaustive project has at least five problems. The first is that it may be hard to set this up because of the existence of value pluralism. The second is that it may be dangerous to subject our decisions to a system that may not be programmed correctly or fail to work correctly. The third is that no computer system could ever possess moral agency. The fourth is that it excludes moral progress. The final problem is that it would not turn us into more moral beings. Others have perceived similar problems with the exhaustive project and proposed auxiliary forms of AI enhancement. I engage with some of these and propose an alternative, a Socratic form of AI enhancement. It is developed more fully in a recently published paper, co-authored with Francisco Lara [Lara F, Deckers J. Artificial Intelligence as a Socratic Assistant for Moral Enhancement. Neuroethics 2019, https://doi.org/10.1007/s12152-019-09401-y].

02/04/2020 | Joseph P. Bullock | Using neural networks for high dimensional function interpolation and extrapolation
Abstract

In many physical and theoretical calculations, multivariable functions with non-trivial divergent structures must be integrated, e.g. through Monte Carlo integration, which can become extremely computationally expensive. In such situations we must be careful when calculating and interpreting uncertainties in our procedures so as to understand the limits of our calculation. It is well known that neural networks can approximate any arbitrary continuous, non-linear function given sufficient data and trainable parameters. Given the low inference time of a neural network, if we are able to train such a model to act as an approximation to these highly complex functions, while requiring only a fraction of the data needed for a full evaluation, then we have the potential to save orders of magnitude of compute time. However, although there is great potential in this idea, we must also understand the limitations of a neural network approximation, including a careful and rigorous understanding of its associated uncertainties. In this talk I will discuss the use of neural networks as high-dimensional, multivariable interpolation functions, highlighting their power as well as their pitfalls. In particular I will give an introduction as to how these methods can be applied in the field of particle physics, in which we perform high-precision calculations for use in simulating processes at CERN’s Large Hadron Collider, where the largest bottleneck to advancement is the computing time required for such calculations. This talk will focus on recent work published here: https://arxiv.org/abs/2002.07516
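
A toy version of the surrogate idea (the integrand and sizes are invented; the paper handles divergent structures and uncertainties far more carefully): train a small network on a few thousand "expensive" evaluations, then estimate the integral from many cheap surrogate calls.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def integrand(x):
    """Stand-in for an expensive multivariable function."""
    return np.exp(-np.sum(x**2, axis=1)) * (1 + 0.5 * np.sin(3 * x[:, 0]))

rng = np.random.default_rng(6)
dim = 4
X_train = rng.uniform(-1, 1, size=(2_000, dim))          # few "expensive" evaluations
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
surrogate.fit(X_train, integrand(X_train))

X_mc = rng.uniform(-1, 1, size=(200_000, dim))           # many cheap surrogate calls
volume = 2.0 ** dim
print(f"surrogate MC estimate: {volume * surrogate.predict(X_mc).mean():.4f}")
print(f"direct    MC estimate: {volume * integrand(X_mc).mean():.4f}")
```
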

26/03/2020 | Christian Richardt | Towards reconstructing and editing the visual world

Abstract

I will be talking about some of my recent work on reconstructing and editing the visual world. My main research focus is to accurately and comprehensively capture real-world environments, including all visual dynamics such as people and moving animals or plants. My goal is to reproduce the captured environments and their dynamics in VR with photographic realism, correct motion parallax and overall depth perception. I will present our work Parallax360 (TVCG 2018), MegaParallax (TVCG 2019) and live intrinsic material estimation (CVPR 2018) that take steps towards this goal. In parallel to reconstructing the visual world, I am also interested in editing it in various ways. To this end, I will be presenting our work on HoloGAN (ICCV 2019), Deep Video Portraits (SIGGRAPH 2018) and Neural Style-Preserving Visual Dubbing (SIGGRAPH Asia 2019).

12/03/2020 | Hubert Shum, Northumbria University | Human Movement Understanding for Visual Computing
Abstract

Due to advances in human motion/surface capture technology and the availability of public databases, human movement understanding has become a core component of many research problems across multiple research domains. In computer graphics, a good representation of human movement facilitates realistic character animation and high-level crowd control. In computer vision, modelling human movement is a key process for effective action classification and re-identification. In motion analysis, key features extracted from human movement enable effective motion-based human-computer interaction and gait analysis. The core problem here is to model human movement in a meaningful way, such that we can generalize knowledge to perform synthesis, recognition and analysis. In this talk, I will discuss the importance of human movement understanding in computer science. With the results of my research projects, I will demonstrate how it connects different research fields. I will show how my projects achieve impact in research and in society, and conclude my presentation with future opportunities and potential directions.

10/03/2020 | Harald Koestler | Whole-program Code Generation within ExaStencils
Abstract

This work presents the ExaStencils code generation pipeline aimed at the development of CFD applications or ocean flow simulations by solving the shallow-water equations (SWE). Supported discretization schemes like finite differences, finite volumes, or quadrature-free discontinuous Galerkin (DG) formulations are discussed. The latter is mapped to our Python front end GHODDESS (generation of higher-order discretizations deployed as ExaSlang specifications). Inside this module, particulars about the algorithm can also be specified, such as the employed time-stepping scheme. After some processing steps, the complete specification is emitted as code in the domain-specific language (DSL) ExaSlang which, in turn, is fed into the ExaStencils code generation framework. This allows applying domain-specific transformations as well as mapping to different hardware platforms. Parallelization is also done automatically via mapping to MPI, OpenMP, and CUDA. Finally, a stand-alone C++ application code is emitted. At run-time, the generated simulation reads in block-structured grids that promise performance benefits over unstructured grids while still being flexible enough to capture more complex geometries sufficiently. We show scaling results on CPU and GPU clusters for several use cases of generated simulation codes.
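
To illustrate in miniature what emitting a stand-alone kernel from a high-level stencil specification means (this is not ExaSlang syntax, just a toy generator), here is a sketch that turns an offset-to-coefficient map into a C kernel.

```python
def generate_stencil_kernel(name, stencil):
    """Emit a C99 kernel applying a 2D stencil given as {(di, dj): coefficient}."""
    terms = " + ".join(f"{c} * in[i + ({di})][j + ({dj})]"
                       for (di, dj), c in stencil.items())
    return (f"void {name}(int n, double in[n][n], double out[n][n]) {{\n"
            f"  for (int i = 1; i < n - 1; ++i)\n"
            f"    for (int j = 1; j < n - 1; ++j)\n"
            f"      out[i][j] = {terms};\n"
            f"}}\n")

laplacian = {(0, 0): -4.0, (-1, 0): 1.0, (1, 0): 1.0, (0, -1): 1.0, (0, 1): 1.0}
print(generate_stencil_kernel("apply_laplacian", laplacian))
```
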

27/02/2020 | Eike Mueller, Bath | Fast semi-implicit DG solvers for fluid dynamics: hybridization and multigrid preconditioners
Abstract

For problems in Numerical Weather Prediction, time to solution is critical. Semi-implicit time-stepping methods can speed up geophysical fluid dynamics simulations by taking larger model time-steps than explicit methods. This is possible since semi-implicit integrators treat the fast (but physically less important) waves implicitly. As a consequence, the time-step size is not restricted by an overly tight CFL condition. A disadvantage of this approach is that a large system of equations has to be solved repeatedly at every time step. However, using a suitably preconditioned iterative method significantly reduces the computational cost of this solve, potentially making a semi-implicit scheme faster overall. A good spatial discretisation is equally important. Higher-order Discontinuous Galerkin (DG) methods are known for having high arithmetic intensity and can be parallelised very efficiently, which makes them well suited for modern HPC hardware. Unfortunately, the arising linear system in semi-implicit timestepping is difficult to precondition since the numerical flux introduces off-diagonal artificial diffusion terms. Those terms prevent the traditional reduction to a Schur-complement pressure equation. This issue can be avoided by using a hybridised DG discretisation, which introduces additional flux unknowns on the facets of the grid and results in a sparse elliptic Schur-complement problem. Recently Kang, Giraldo and Bui-Thanh [1] solved the resulting linear system with a direct method. However, since the cost grows with the third power of the number of unknowns, this becomes impractical for high-resolution simulations. We show how this issue can be overcome by constructing a non-nested geometric multigrid preconditioner similar to [2] instead. We demonstrate the effectiveness of the multigrid method for the non-linear shallow water equations, an important model system in geophysical fluid dynamics. With our solvers, semi-implicit IMEX time-steppers become competitive with standard explicit Runge-Kutta methods. Hybridisation and reduction to the Schur-complement system are implemented in the Slate language [3], which is part of the Firedrake Python framework for solving finite element problems via code generation. [1] Kang, Giraldo, Bui-Thanh (2019): “IMEX HDG-DG: a coupled implicit hybridized discontinuous Galerkin (HDG) and explicit discontinuous Galerkin (DG) approach for shallow water systems”, Journal of Computational Physics, 109010, arXiv:1711.02751. [2] Cockburn, Dubois, Gopalakrishnan, Tan (2014): “Multigrid for an HDG method”, IMA Journal of Numerical Analysis 34(4):1386-1425. [3] Gibson, Mitchell, Ham, Cotter (2018): “A domain-specific language for the hybridization and static condensation of finite element methods”, arXiv preprint arXiv:1802.00303.
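
The multigrid idea underpinning the preconditioner can be shown in its textbook form, a two-grid cycle for the 1D Poisson problem; the talk’s non-nested multigrid for hybridised DG operators is of course far more involved.

```python
import numpy as np

n = 127                                     # interior fine-grid points (odd for coarsening)
h = 1.0 / (n + 1)
f = np.ones(n)                              # right-hand side of -u'' = f

def apply_A(u):                             # tridiagonal 1D Laplacian, matrix-free
    r = 2.0 * u
    r[:-1] -= u[1:]
    r[1:] -= u[:-1]
    return r / (h * h)

def smooth(u, f, sweeps, omega=2.0 / 3.0):  # damped Jacobi kills high-frequency error
    for _ in range(sweeps):
        u = u + omega * (h * h / 2.0) * (f - apply_A(u))
    return u

nc = (n - 1) // 2                           # coarse grid: every second interior point
hc = 2.0 * h
Ac = (np.diag(2.0 * np.ones(nc)) - np.diag(np.ones(nc - 1), 1)
      - np.diag(np.ones(nc - 1), -1)) / (hc * hc)

u = np.zeros(n)
for cycle in range(8):
    u = smooth(u, f, 3)                                            # pre-smooth
    res = f - apply_A(u)
    coarse = 0.25 * (res[0:-2:2] + 2.0 * res[1:-1:2] + res[2::2])  # full-weighting restriction
    ec = np.linalg.solve(Ac, coarse)                               # exact coarse solve
    e = np.zeros(n)                                                # linear prolongation:
    e[1::2] = ec
    e[::2] = 0.5 * (np.concatenate(([0.0], ec)) + np.concatenate((ec, [0.0])))
    u = smooth(u + e, f, 3)                                        # post-smooth
    print(f"cycle {cycle}: residual norm = {np.linalg.norm(f - apply_A(u)):.2e}")
```
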

13/02/2020 | Marek Tokarski, Durham | Enabling Student Enterprise, Innovation and Creativity
Abstract

This talk will give an overview of key opportunities provided by Careers & Enterprise to develop student skills and mindsets in the areas of enterprise, innovation and creativity. Support is provided for both curricular and extra-curricular activity, including:
- PGR skills workshops and events
- Funding opportunities to increase the provision of enterprise education on a curricular or co-curricular basis
- A new enterprise space which will be located on the ground floor of the new Maths and Computer Science Building

16/01/2020 | Joe Bullock, Dept Physics | Mapping Risks and Biases in AI Systems onto Human-Level Harms
Abstract

The use of AI across academia, industry and the public sector is widespread. In addition, access to the methods and technology to build and deploy AI systems is becoming increasingly democratised through open-source software, publications, and online tutorials, as well as cheaper compute resources. Although this ease of access is positive, allowing more people to leverage the power of AI, education in understanding the potential negative consequences of its use can be lacking, both at the development and the regulatory level. Furthermore, biases can manifest themselves at all stages in the development and deployment pipeline of AI models, from the project formulation stage to dataset creation and the visualisation of results. In this talk I will discuss work done in collaboration with the United Nations in which we specifically highlight the risks surrounding the ability to generate text using language models and the potential implications for political stability. Additionally, I will discuss further work the UN is conducting in mapping such biases in AI systems and understanding how they can translate into human-level harms. I will close with a collection of broad recommendations for addressing these biases and mitigating the associated harms.

05/12/2019 | Gregoire Payen-de-la-Garander | Workshop on using massive data on the NCC NVidia cluster
Abstract

The workshop focuses on:
- how to parallelise your program
- how to allocate tasks well across the GPUs and balance them
- how to use the clusters efficiently and solve the ‘big’ problems quickly
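
A generic pattern for the task-allocation point (assumptions: a 4-GPU node and CUDA-aware worker code; this is not NCC-specific tooling) is to pin each worker process to one device and hand out tasks round-robin.

```python
import os
from multiprocessing import Pool

N_GPUS = 4   # assumption: GPUs visible on the node you were allocated

def worker(task):
    task_id, gpu = task
    # Pin the process to one GPU *before* any CUDA framework is initialised,
    # so PyTorch/TensorFlow etc. each see exactly one device.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu)
    # ... run the real GPU workload for `task_id` here ...
    return f"task {task_id} -> GPU {gpu}"

if __name__ == "__main__":
    tasks = [(t, t % N_GPUS) for t in range(16)]   # round-robin keeps GPUs balanced
    with Pool(processes=N_GPUS) as pool:
        for message in pool.imap_unordered(worker, tasks):
            print(message)
```
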

07/11/2019 | Andreas Vlachidis, UCL | Using dates as contextual information for personalized cultural heritage experiences
Abstract

Semantics can be engaged to promote reflection on cultural heritage by means of dates (historical events or annual commemorations), owing to their connections to a collection of items and to the visitors’ interests. Such links to specific dates can trigger curiosity, increase retention, and guide visitors around the venue following new appealing narratives in subsequent visits. The proposal has been explored and evaluated on the collection of the Archaeological Museum of Tripoli (Greece), for which a team of humanities experts wrote a set of diverse narratives about the exhibits. A year-round calendar was crafted so that certain narratives would be more or less relevant on any given day. Expanding on this calendar, personalised recommendations can be made by sorting those relevant narratives according to personal events and interests recorded in the profiles of the target users. Evaluation of the associations by experts and potential museum visitors shows that the proposed approach can discover meaningful connections, while many others that are more incidental can still contribute to the intended cognitive phenomena.
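
A toy scoring function conveys the idea (titles, tags and weights invented for the sketch): a narrative’s relevance combines its distance to the nearest anniversary with its overlap against the visitor’s interests.

```python
from datetime import date

# Illustrative narratives; field names are invented for this sketch.
NARRATIVES = [
    {"title": "Games and athletics in Arcadia", "on": date(2024, 7, 26), "tags": {"sport"}},
    {"title": "Harvest rituals",                "on": date(2024, 9, 22), "tags": {"religion"}},
    {"title": "Warfare and armour",             "on": date(2024, 3, 25), "tags": {"war", "sport"}},
]

def relevance(narrative, today, interests, window=30):
    gap = abs((narrative["on"] - today).days) % 365
    gap = min(gap, 365 - gap)                     # distance to the anniversary
    date_score = max(0.0, 1.0 - gap / window)     # fades out over `window` days
    interest_score = len(narrative["tags"] & interests) / len(narrative["tags"])
    return 0.6 * date_score + 0.4 * interest_score

today, interests = date(2024, 7, 20), {"sport"}
for n in sorted(NARRATIVES, key=lambda n: -relevance(n, today, interests)):
    print(f"{relevance(n, today, interests):.2f}  {n['title']}")
```
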

24/10/2019 | Charles Murray | Lazy Stencil Integration in multigrid algorithms
Abstract

Multigrid algorithms are among the most efficient solvers for elliptic partial differential equations. They are known to solve partial differential equations, e.g. the Poisson equation and related forms, in an optimal number of compute steps. However, we have to invest in an expensive matrix setup phase before we kick off the actual solve. This assembly effort is non-negligible and can delay the time to solution, particularly if the fine grid stencil integration is laborious. We propose to start multigrid solves with very inaccurate, geometric fine grid stencils which are then updated and improved in parallel to the actual solve. This update can be realised greedily and adaptively. We furthermore propose that any operator update propagates at most one level at a time, which ensures that multiscale information propagation does not hold back the actual solve. The increased asynchronicity, i.e. the laziness, improves the runtime without a loss of stability if we make the grid update sequence take into account that multiscale operator information propagates at finite speed.

10/10/2019 | Dave Braines, IBM | Conversational Explanations – Explainable AI through human-machine conversation
Abstract

Explainable AI has significant focus within both the research community and the popular press. The tantalizing potential of artificial intelligence solutions may be undermined if the machine processes which produce these results are black boxes that are unable to offer any insight or explanation into the results, the processing, or the training data on which they are based. The ability to provide explanations can help to build user confidence, rapidly indicate the need for correction or retraining, and provide initial steps towards the mitigation of issues such as adversarial attacks or allegations of bias. In this talk I will describe the space of Explainable AI with a particular focus on the role of the human users within the human-machine hybrid team, and on whether a conversational interaction style is useful for obtaining such explanations quickly and easily. The content therefore falls into three broad areas: Explainable AI, human roles in explanations, and conversational explanations. This talk is an abridged subset of the material on this topic presented as a tutorial at CogSIMA (IEEE Conference on Cognitive and Computational Aspects of Situation Management) 2019, the details of which can be found in the DAIS ITA (Distributed Analytics and Information Science International Technology Alliance) Science Library: http://sl.dais-ita.org/science-library/paper/doc-3382 (for more details such as speaker bio and intended audience please refer to the CogSIMA 2019 tutorials page).