Oxford Clinical AI Research for Enhanced Safety

A cross-disciplinary collaboration designing, evaluating, and translating trustworthy AI into clinical practice to enhance patient safety.

Engineering

Engineers at the Institute of Biomedical Engineering, led by Professor Alison Noble, develop robust algorithms and open, reproducible workflows for clinical AI. See Noble Group.

Psychology

Researchers in the Department of Experimental Psychology, led by Professor Nick Yeung, study how clinicians use AI tools in decision-making and how to support effective human–AI teaming. See Attention & Cognitive Control Lab.

Clinical practice

Clinicians at OxSTaR, led by Professor Helen Higham, apply human factors methods and simulation-based training to evaluate AI-enabled pathways and improve safety in real clinical settings. See OxSTaR.

Research themes

  • Trustworthy and transparent AI
  • Human factors and clinical simulation
  • Decision support and workflow integration
  • Evaluation frameworks and safety metrics

Recent highlights

  • Collaborative projects across engineering, psychology, and clinical simulation
  • Workshops on safe deployment and evaluation of AI tools
  • Preprints and publications on human–AI decision-making