University of Sheffield joins six leading UK universities and 23 partners to develop solutions for effective and ethical uses of AI

Kathryn Simpson, Lecturer in Digital Humanities, will be joining 25 researchers on the new Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project, developing solutions for effective and ethical uses of generative and predictive AI.

  • The Participatory Harm Auditing Workbenches and Methodologies project has been awarded £3.5 million of the £12 million given to a series of breakthrough AI projects.
  • The University of Glasgow will lead the consortium, with support from colleagues at the Universities of Sheffield, Edinburgh, Stirling, Strathclyde and York, and at King’s College London.
  • Researchers will develop new methods for maximising the potential benefits of predictive and generative AI.

AI is a fast-moving field, and its developments risk outpacing the ethical frameworks and processes needed to ensure technological advances are created and used in ways that reduce the risk of harm. The project, Participatory Harm Auditing Workbenches and Methodologies (PHAWM), will address this challenge and seek a balance between progress and ethics.

The project will pioneer participatory AI auditing, where non-experts including regulators, end-users and people likely to be affected by decisions made by AI systems will play a role in ensuring that those systems provide fair and reliable outputs.

Together, the consortium will develop new methods for maximising the potential benefits of predictive and generative AI while minimising their potential for harm arising from bias and ‘hallucinations’, where AI tools present false or invented information as fact.

Kathryn Simpson, Lecturer in Digital Humanities at the Digital Humanities Institute, said: “I am incredibly excited to be part of this innovative and culturally prescient project. Although AI already affects people’s lives through decision support systems, information search, and AI-generated text and images, a significant barrier to the safe and trustworthy deployment of predictive and generative AI is their unassessed potential for harm. This project is a dynamic collaboration between seven institutions and 23 sector and commercial partners to create methods which will allow stakeholders to accurately audit the AI systems they are using. Working in the areas of Cultural Heritage, Health, Media and Collaborative Content Generation, we will address the foundational issue in the application of AI: identifying an AI that is biased, inaccurate or unfair.”

The project will develop new tools to support the auditing process in partnership with relevant stakeholders, focusing on four key use cases for predictive and generative AI, and will create new training resources to encourage widespread adoption of the tools.

The predictive AI use cases will focus on health and media content: analysing datasets used to predict hospital readmissions and assess child attachment for potential bias, and examining fairness in search engines and in hate speech detection on social media.

The generative AI use cases will look at cultural heritage and collaborative content generation, exploring the potential of AI to deepen understanding of historical materials without misrepresentation or bias, and how AI could be used to write accurate Wikipedia articles in under-represented languages without contributing to the spread of misinformation.