Nine UK research teams have been awarded funding by the Advanced Research and Invention Agency (ARIA) under Technical Area 3 (TA3) of its Safeguarded AI programme. The programme focuses on developing mathematical and computational methods to ensure that advanced AI systems meet safety standards, so that they can be deployed responsibly in critical sectors such as infrastructure, healthcare, and manufacturing. Two of the funded projects are led by researchers at the University of Oxford.
The first project, "Towards Large-Scale Validation of Business Process Artificial Intelligence (BPAI)," is led by Professors Nobuko Yoshida and Dave Parker of the Department of Computer Science at Oxford. The team will use probabilistic process models and the PRISM verification toolset to provide formal, quantitative guarantees for AI-based systems in Business Process Intelligence (BPI), with the goal of developing a workflow for analysing automated BPI solutions against safety benchmarks. Dr. Adrián Puerto Aubel and Joseph Paulus will also contribute to the project. Professors Yoshida and Parker noted, "Through the Safeguarded AI programme, ARIA is creating space to explore rigorous, formal approaches to AI safety. Our project addresses the challenge of verifying AI-based business process systems using probabilistic models and automated analysis techniques."
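To make the idea of a formal, quantitative guarantee more concrete, the sketch below computes the probability that a toy business-process model eventually reaches a failure state. It is purely illustrative: the states, transition probabilities, and process are invented for the example rather than taken from the project, and the calculation is done directly with NumPy rather than through PRISM, which checks properties of this kind (for example, P=? [ F "failed" ]) symbolically and at far larger scale.

```python
# Illustrative only: a tiny discrete-time Markov chain standing in for one
# automated business-process step, and the probability that it eventually
# reaches a "failed" state. All states and numbers here are made up.
import numpy as np

# States: 0 = processing, 1 = needs_review, 2 = completed, 3 = failed
P = np.array([
    [0.70, 0.20, 0.09, 0.01],   # processing
    [0.50, 0.30, 0.15, 0.05],   # needs_review
    [0.00, 0.00, 1.00, 0.00],   # completed (absorbing)
    [0.00, 0.00, 0.00, 1.00],   # failed (absorbing)
])

transient = [0, 1]   # non-absorbing states
target = 3           # "failed"

# Solve (I - Q) x = b, where Q is the transient-to-transient block and
# b holds the one-step probabilities of moving from a transient state to "failed".
Q = P[np.ix_(transient, transient)]
b = P[transient, target]
x = np.linalg.solve(np.eye(len(transient)) - Q, b)

print(f"P(eventually failed | start = processing)   = {x[0]:.4f}")
print(f"P(eventually failed | start = needs_review) = {x[1]:.4f}")
```

A safety benchmark in this style would then take the form of a threshold on such a probability, e.g. requiring the failure probability from the initial state to stay below some agreed bound.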
The second project, "SAGEflex: Safeguarded AI Agents for Grid-Edge Flexibility," focuses on developing a scalable AI-based framework for managing the millions of grid-edge devices that will support Great Britain's net-zero power grid. Led by Associate Professor Thomas Morstyn and Professor Jakob Foerster of Oxford's Department of Engineering Science, the team will explore multi-agent reinforcement learning (MARL), with the aim of developing safety specifications and a supporting software platform. Professor Morstyn commented, "Our project was motivated by the lack of rigorous approaches to AI safeguarding for power system applications, which we identified as the fundamental gap for industry adoption."
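As a rough illustration of what a run-time safety specification for grid-edge devices might look like (this is not SAGEflex itself; the device count, feeder limit, and "shield" function below are hypothetical), the sketch shows per-device power requests, standing in for learned MARL policy outputs, being scaled back so that their aggregate never exceeds a fixed limit.

```python
# Illustrative only: a toy "safety shield" over proposed grid-edge actions.
# In a MARL setting each device's policy would propose its own power adjustment;
# here random proposals stand in for learned policies. The shield rescales the
# joint action so the aggregate stays within a hypothetical feeder limit,
# one simple way a quantitative safety specification can be enforced at run time.
import numpy as np

rng = np.random.default_rng(0)

N_DEVICES = 1000            # e.g. EV chargers, heat pumps, home batteries
FEEDER_LIMIT_KW = 2000.0    # hypothetical aggregate import limit

def shield(proposed_kw: np.ndarray, limit_kw: float) -> np.ndarray:
    """Scale down proposed power draws so their sum never exceeds limit_kw."""
    total = proposed_kw.sum()
    if total <= limit_kw:
        return proposed_kw
    return proposed_kw * (limit_kw / total)

# Stand-in for per-device policy outputs (kW each device wants to draw).
proposed = rng.uniform(0.0, 7.0, size=N_DEVICES)
safe = shield(proposed, FEEDER_LIMIT_KW)

print(f"proposed total:   {proposed.sum():8.1f} kW")
print(f"dispatched total: {safe.sum():8.1f} kW (limit {FEEDER_LIMIT_KW} kW)")
assert safe.sum() <= FEEDER_LIMIT_KW + 1e-6   # the limit holds by construction
```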
Both projects seek to establish quantifiable safety standards for AI applications in their respective fields. More details about the Safeguarded AI programme can be found on the ARIA website.