Members of the Digital Humanities theme within AIHS combine interdisciplinary expertise spanning the digitisation of texts and collaborative digital libraries, the history of technology, computational modelling, and the development of computational approaches for humanities data and cultural phenomena. Example projects include making East Asian printed books accessible to a wider audience, modelling musical structure, exploring trust and empathy in conversational agents, and researching the history of the audio CD format in the UK.
Premodern China in the digital age
Building on experience and data produced while creating the Chinese Text Project digital library, my current work continues to focus on applying digital technologies to aid in our understanding of the language and history of premodern China. Recently this has included applying crowdsourcing to the task of creating Linked Open Data to record structured, precise, and explicitly sourced data on historical people, events, bureaucratic structures, literature, geography and astronomical observations evidenced in early Chinese texts. Ongoing work involves creating Natural Language Processing models sufficiently accurate to greatly augment the accessibility of premodern texts to modern readers, and evaluating the effects of different types of computer-mediated reading assistance offered to readers of historical literature.
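As a concrete illustration of what a structured, explicitly sourced Linked Open Data record can look like, the sketch below serialises a few triples about a historical figure as N-Triples. The namespaces, property names, and passage identifier here are invented for illustration, not the project's actual vocabulary.

```python
# Minimal sketch: recording an explicitly sourced claim about a
# historical person as Linked Open Data. All URIs and property names
# below are hypothetical illustrations, not the project's identifiers.

def to_ntriples(triples):
    """Serialise (subject, predicate, object) triples as N-Triples lines.
    Objects starting with 'http' are treated as URIs, others as literals."""
    lines = []
    for s, p, o in triples:
        obj = f"<{o}>" if o.startswith("http") else '"' + o + '"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

EX = "https://example.org/entity/"      # hypothetical namespace
PROP = "https://example.org/property/"  # hypothetical namespace

triples = [
    (EX + "person/SimaQian", PROP + "name", "司馬遷"),
    (EX + "person/SimaQian", PROP + "heldOffice", EX + "office/TaiShiLing"),
    # Every claim carries an explicit source: here, an invented passage ID.
    (EX + "person/SimaQian", PROP + "evidencedBy", EX + "text/Shiji/130"),
]

print(to_ntriples(triples))
```

The point of the "evidencedBy" triple is that each crowdsourced assertion stays traceable back to the passage of the early text that supports it.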
Opening the Red Book
A Recipe for Perceived Emotions in Music
Annaliese Micallef Grimaud
Have you ever listened to an unfamiliar piece of music and perceived it as sounding sad or happy? The answer is probably yes.
A common follow-up question is: How does this work?
One of my research projects looked at how different musical features, such as tempo and loudness, are used to embed different emotions in a musical piece. I did this by letting music listeners themselves show me how they think different emotions should sound in music: participants changed instrumental tonal music in real time, adjusting a combination of six or seven musical features through a computer interface I created called EmoteControl. This work identified how different combinations of tempo, pitch, dynamics, brightness, articulation, mode, and later instrumentation helped convey different emotional expressions through the music.
Find out how 6 musical features were used to convey 7 emotions (anger, sadness, fear, joy, surprise, calmness, and power) and whether they were successful or not here: https://journals.sagepub.com/doi/10.1177/20592043211061745
Find out how the same 6 musical features plus the option to change the instrument playing were used to convey the same 7 emotions here: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0279605
Computational methods for/with human harmonic analysis
Much of my recent work centres on the creation, curation, and analysis of musical datasets, particularly human analyses in encoded formats. For harmonic analysis, for example, I have been curating a meta-corpus of encoded analyses and using that data for:
- research projects applying the data to automatic harmonic analysis with machine learning (see especially "AugmentedNet" and the work that preceded it), as well as related studies like "What if the When implies the What?";
- research on the data in itself ("Chromatic Chords in Theory and Practice", ISMIR 2023, coming soon);
- a pedagogical, public-facing anthology provided as part of the …
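To give a flavour of the labelling task that automatic harmonic analysis involves, here is a deliberately simple sketch (my illustration, not code from these projects) that names a set of pitch classes as a major or minor triad. Real systems such as the neural models mentioned above operate on full scores with far richer chord vocabularies.

```python
# Toy sketch of one step in harmonic analysis: labelling a pitch-class
# set as a major or minor triad. Pitch classes are integers 0-11 (C = 0).

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

def label_triad(pitch_classes):
    """Return e.g. 'C major' if the set forms a triad, else None."""
    pcs = set(pitch_classes)
    for root in range(12):
        if pcs == {root, (root + 4) % 12, (root + 7) % 12}:
            return f"{PITCH_CLASSES[root]} major"
        if pcs == {root, (root + 3) % 12, (root + 7) % 12}:
            return f"{PITCH_CLASSES[root]} minor"
    return None

print(label_triad([0, 4, 7]))   # C E G  → C major
print(label_triad([9, 0, 4]))   # A C E  → A minor
```

Machine-learning models for this task learn such mappings from encoded human analyses like those in the meta-corpus, rather than from hand-written rules.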
Recursive Bayesian Networks
Real-world data – such as sounds, images, music, or videos – are continuous and noisy. Yet they exhibit highly complex dependency structures that are challenging to identify and may be ambiguous. Think of someone making an ambivalent joke that can be interpreted in multiple ways, an image of a complex scene with multiple people interacting in various ways, or a piece of music that evokes a multitude of emotions.
Recursive Bayesian Networks allow for modelling these intricate cases where a variety of different possible structures need to be taken into account. They generalise probabilistic context-free grammars – commonly used for modelling natural language – to allow for continuous, gradual aspects to be represented.
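For readers unfamiliar with the discrete case being generalised: a probabilistic context-free grammar attaches probabilities to rewrite rules and thereby defines a distribution over strings. The toy grammar below is an invented example for illustration only; Recursive Bayesian Networks extend this setting by letting the symbols additionally carry continuous values.

```python
import random

# Minimal probabilistic context-free grammar (PCFG) sampler.
# The grammar is an invented toy example; RBNs generalise this idea
# by attaching continuous variables to the symbols being expanded.

# Each nonterminal maps to a list of (probability, right-hand side).
GRAMMAR = {
    "S":  [(1.0, ["NP", "VP"])],
    "NP": [(0.7, ["the", "N"]), (0.3, ["N"])],
    "VP": [(1.0, ["V", "NP"])],
    "N":  [(0.5, ["melody"]), (0.5, ["listener"])],
    "V":  [(1.0, ["moves"])],
}

def sample(symbol, rng):
    """Recursively expand `symbol`, choosing rules by their probabilities."""
    if symbol not in GRAMMAR:          # terminal word: emit it
        return [symbol]
    rules = GRAMMAR[symbol]
    weights = [p for p, _ in rules]
    _, rhs = rng.choices(rules, weights=weights, k=1)[0]
    return [word for s in rhs for word in sample(s, rng)]

rng = random.Random(0)
print(" ".join(sample("S", rng)))
```

Where a PCFG chooses only among discrete rewrites like these, an RBN can also model gradual quantities, which is what makes it suitable for continuous, noisy data such as audio or images.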