Profile

I am a consultant radiologist at New Cross Hospital, Wolverhampton, where I have worked since 2009. My clinical practice covers the full general radiology service — CT, MRI, ultrasound, fluoroscopy, and plain film reporting — with subspecialty interests in chest imaging, head and neck, cardiac CT, and image-guided biopsy and drainage. I have a particular interest in interstitial lung disease and, more recently, lung cancer screening through the Targeted Lung Health Check programme.

My background is unusual for a radiologist. I read Natural Sciences at Cambridge, then completed a PhD in quantum optics at the University of London, before working for two years as an analyst at Logica on machine vision and signal processing projects for UK Government departments. I subsequently retrained in medicine at Birmingham, qualifying with honours in 1999.

Since 2020 I have developed a parallel strand of work in digital medicine, applying natural language processing, large language models, and clinical AI to problems in radiology audit, education, and service improvement. I held an NHS England Topol Fellowship (2022–23) and was Radiology Lead for the West Midlands Clinical Research Network (NIHR, 2022–24). I am a Visiting Researcher at the University of Surrey, working on human-AI cooperation.

Running through all of this is a single conviction, argued in the BMJ in 2023 and embodied in The DigitalElf Project: that clinicians who hold deep domain knowledge are the right people to build the tools their specialty needs, and that — with current AI tooling — the technical barrier to doing so is now low enough that waiting for industry to build those tools is no longer always necessary.

Research & Digital Interests

  • CLEARaudit — LLM-based radiology audit at scale
    CLEARaudit is the radiology audit workstream delivered through the National CLEAR Programme and its partner 33n. As clinical lead, I have worked with CLEAR to assemble a network of NHS imaging departments and establish multicentre data sharing and analysis.
    Full programme description ↓
  • Clinical AI deployment and evaluation
    Leading a departmental project with harrison.ai to implement AI triage for the lung cancer pathway and acute CT brain reporting. Previous work included evaluation of chest radiograph models as part of the Topol Fellowship.
  • Human-AI cooperation in medical imaging (PecMan, University of Surrey)
    Visiting researcher at the Centre for Vision, Speech and Signal Processing (Prof. Gustavo Carneiro's group). The PecMan project investigated the optimisation of human-AI complementarity and algorithmic fairness in mammogram interpretation. We plan to extend this work to wider clinical domains involving multimodal data, through projects currently seeking EPSRC support.
  • Error detection and quality improvement — REALM
    Using language models to identify cases suitable for error analysis in acute radiology, producing targeted educational material and supporting departmental quality improvement.
  • Teaching case harvesting — Auto-TeaCH
    Co-developer (with Roger Marlow) of Auto-TeaCH, an application that catalogues radiology reports at scale to find and aggregate teaching material within hospital systems.
  • Radiology sustainability
    Three posters at ECR 2025 (Planet Radiology session) examining LLM-facilitated analysis of CT brain follow-up recommendation rates, CTPA yield assessment at scale, and lossless PACS image compression.
  • Digital education and the DigitalElf Project
    With Roger Marlow I co-founded The DigitalElf Project, which documents how clinicians can engage productively with digital medicine using open-source tools and code generation.
CLEARaudit

Delivered through the National CLEAR Programme & 33n

CLEARaudit is the radiology audit workstream of the National CLEAR Programme — Clinically-Led workforcE and Activity Redesign — a broader NHS initiative addressing workforce planning and service redesign across clinical specialties. I lead the radiology data science within CLEARaudit, having advocated for multicentre collaboration and convened the imaging network partners. The work is conducted within a trusted research environment operated by 33n, enabling compliant processing of identifiable NHS data across participating trusts.

The central hypothesis is that LLM-based audit, applied at scale across a network of NHS departments, can identify variation in radiological practice that escapes conventional point-in-time audit. Structured feedback to departments is expected to reduce unwarranted variation, improve reporting standardisation, and reduce unnecessary downstream imaging and its associated cost and radiation burden.
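The shape of that pipeline can be sketched in a few lines of Python. This is a minimal illustration rather than CLEARaudit code: the `label_report` stand-in uses trivial keyword matching where the real workstream calls a language model, and the trusts and reports are invented.

```python
from collections import defaultdict

def label_report(report_text: str) -> str:
    """Stand-in for the LLM step: trivial keyword matching.
    The real pipeline sends each report to a language model with a
    validated taxonomy and few-shot examples."""
    text = report_text.lower()
    if "no pulmonary embolus" in text:
        return "negative"
    if "pulmonary embolus" in text:
        return "positive"
    return "indeterminate"

def audit(reports: list[tuple[str, str]]) -> dict[str, dict[str, float]]:
    """Aggregate (department, report) pairs into per-department label rates."""
    counts: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for dept, text in reports:
        counts[dept][label_report(text)] += 1
    return {
        dept: {label: n / sum(labels.values()) for label, n in labels.items()}
        for dept, labels in counts.items()
    }

reports = [
    ("Trust A", "Segmental pulmonary embolus in the right lower lobe."),
    ("Trust A", "No pulmonary embolus. Incidental 6 mm nodule."),
    ("Trust B", "No pulmonary embolus identified."),
]
rates = audit(reports)
```

Swapping the stand-in for a real LLM call changes nothing structurally; the aggregation and feedback layers are ordinary code, which is why this is deployable now.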

1
Midlands ED in CT
Investigating yield and levers for pathway change in CTPA and CT abdomen and pelvis examinations for acute patients across 15 Midlands trusts. The dataset will comprise more than half a million reports produced between 2018 and 2024, enabling LLM-based classification of diagnostic yield, indication quality, and inter-institutional variation at a scale not previously achievable.
2
National CTC Quality Audit
Eight centres of excellence are submitting CT colonography report data for LLM-based evaluation of polyp detection rates, examination quality, and reporting standards — providing a national benchmark for service evaluation and improvement in a technically demanding and relatively low-volume modality.
3
National Demand Optimisation
NHSE has commissioned an evaluation of four high-demand, often low-yield examination types: CT brain, CT abdomen and pelvis, MRI lumbar spine, and transvaginal ultrasound. The workstream uses LLM audit to identify unwarranted variation in referral patterns and reporting outcomes, and to define optimisation opportunities at system level.

More on audit automation in the CLEARtalk podcast.

Publications & Presentations

Journal articles & preprints — 2020 onwards
Zhang Z, Ai W, Wells K, Rosewarne D, Do T-T, Carneiro G. Learning to Complement and to Defer to Multiple Users. European Conference on Computer Vision 2024; pp 144–162. Springer.
Masroor M, Hassan T, Tian Y, Wells K, Rosewarne D, Do T-T, Carneiro G. Fair Distillation: Teaching Fairness from Biased Teachers in Medical Imaging. arXiv preprint arXiv:2411.11939, 2024.
Stogiannos N, Gillan C, Precht H, Sá Dos Reis C, Kumar A, O'Regan T, …, Rosewarne D, et al. A multidisciplinary team and multiagency approach for AI implementation in medical imaging and radiotherapy. Journal of Medical Imaging and Radiation Sciences 2024; 55(4):101717.
Shelmerdine SC, Pauling C, Allan E, Langan D, Ashworth E, Yung K-W, …, Rosewarne D, Woznitza N, et al. Artificial intelligence for paediatric fracture detection: a multireader multicase study protocol. BMJ Open 2024; 14(12):e084448.
Rosewarne DM, Marlow RD. An elf service for the NHS: individual doctors can lead digital transformation from the bottom up. BMJ 2023; 383.
Fahim A, Rosewarne D. Tissue sampling in suspected sarcoidosis: can we avoid mediastinal procedures? American Journal of Respiratory and Critical Care Medicine 2020; 202(9):1321.
Conference presentations — ECR 2025
Rosewarne D. LLM-facilitated analysis of recommendation rate for further imaging in acute CT brain studies at a large district general hospital. European Congress of Radiology, Vienna, 2025.
Rosewarne D. LLM evaluation of CTPA yield at scale. European Congress of Radiology, Vienna, 2025.
Rosewarne D. Is your PACS overweight? The benefits of lossless image compression. European Congress of Radiology, Vienna, 2025.
Earlier articles & book chapters
Rosewarne DM. Intravascular contrast media. In: Chapman & Nakielny's Guide to Radiological Procedures, 6th ed. Elsevier, 2017.
Morgan AJ, Rosewarne D, Bapusamy A, Tanner K, Hancock J, Reid D, Mathews S. Application of the recent BTS guidelines to a population of nodule patients. Thorax 2016; 71(Suppl 3):A95.
Aktuerk D, Lutz M, Rosewarne D, Luckraz H. Cheerios in the lung: a rare but characteristic radiographic sign. QJM 2015; 108(9):743–744.
Holloway BJ, Rosewarne D, Jones RG. Imaging of thoracic aortic disease. British Journal of Radiology 2011; 84(Suppl 3):S338–S354.
Rosewarne D, Reynolds JH, Trotter SE, Burge PS. The idiopathic interstitial pneumonias — a survival guide. Imaging 2008; 20(4):289–302.
Physics — doctoral and early career
Rosewarne D, Sarkar S. Rigorous theory of photon localizability. Quantum Optics 1992; 4(6):405.
Rosewarne DM. Ground-state quantum fluctuations in a scalar Hopfield dielectric. Quantum Optics 1991; 3(4):193.

Positions Held

Consultant Radiologist

2009 – present
New Cross Hospital, Wolverhampton

General radiology service (CT, MRI, ultrasound, fluoroscopy, plain film), including image-guided biopsy and drainage, acute emergency rota, and subspecialty work in chest imaging, head and neck, and cardiac CT. Clinical tutor to radiology trainees 2014–2022; FRCR Part 1 Physics Examiner 2017–2022.

Visiting Researcher

2024 – present
Centre for Vision, Speech and Signal Processing, University of Surrey

Member of Professor Gustavo Carneiro's group, contributing to the PecMan project on human-AI cooperation and algorithmic fairness in mammogram interpretation.

Radiology Lead, West Midlands Clinical Research Network

2022 – 2024
NIHR / NHS England

NIHR-funded role supporting radiology research in the West Midlands, with a personal focus on natural language processing of radiology reports for clinical audit at scale.

Topol Fellow

2022 – 2023
Health Education England / NHS England

Fellowship in digital transformation in health. Project work centred on the installation and clinical evaluation of AI-assisted chest radiograph reporting.

Analyst

1992 – 1994
Logica Defence and Civil Government

Machine vision and signal processing analysis and software development on projects contracted by UK Government departments.

Collaborators

The National CLEAR Programme & 33n CLEARaudit — Clinical Lead
33n.co.uk

The National CLEAR Programme (Clinically-Led workforcE and Activity Redesign) is a national NHS initiative addressing workforce planning and service redesign across clinical specialties, of which CLEARaudit is the radiology audit workstream. The programme operates through 33n's trusted research environment, which provides compliant infrastructure for multicentre processing of NHS data. I lead the radiology data science within CLEARaudit.

DetectedX Digital Education
detectedx.com

DetectedX is a platform for AI-assisted radiology education and perception training, used internationally for case-based learning and performance benchmarking. I am exploring its application within the West Midlands Imaging Academy, alongside the use of generative AI to curate training material and support trainee portfolio development.

Centre for Vision, Speech and Signal Processing, University of Surrey Visiting Researcher
surrey.ac.uk / CVSSP

I work with Professor Gustavo Carneiro's group on the PecMan project, which investigates the optimisation of human-AI complementarity and algorithmic fairness in mammogram interpretation. My role is to interface machine learning research with clinical practice and contribute to the translation of findings into radiologically meaningful outcomes.

AI Strategy

Royal College of Radiologists · AI Strategy

Two Parallel Revolutions
in Radiology AI

The case for LLMs in governance & education  ·  The radiologist as programmer

The decade-defining opportunity in radiology AI is not better image perception alone. It is the emergence of two parallel streams — one that gives radiology a nervous system for learning and accountability, and one that gives every radiologist the power to build.

01
The governance & education stream
LLMs as the Connective Tissue of a Learning Radiology System

Radiology has accumulated an extraordinary and almost entirely unmined asset: decades of free-text reports, sitting in RIS systems across every trust in the country. These reports encode clinical reasoning, diagnostic thresholds, uncertainty, recommendation behaviour, and communication quality — at scale, across time, across institutions.

Large language models can read this archive. That sentence should be understood as a structural claim, not a technical one. It means that, for the first time, the output of radiology as a discipline is legible to a system capable of measuring it.

No specialty can improve systematically without a feedback architecture connecting its output to institutional performance. Radiology has almost none. LLMs are the plausible mechanism for building it — using data that already exists, deployable now, without PACS integration or imaging data governance barriers.

01 · yield

Measuring What Radiology Actually Produces

LLMs can classify whether reports meet structured quality standards, whether conclusions are unambiguous, whether PE is confirmed or excluded, whether incidental findings are appropriately escalated. This is yield measurement at population scale — currently impossible without them.
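Mechanically, that measurement rests on constraining a language model to a fixed, countable label set. A minimal sketch of the prompt-building side, using an illustrative label set rather than any production taxonomy:

```python
PE_LABELS = [  # illustrative label set, not a production taxonomy
    "PE_CONFIRMED",
    "PE_EXCLUDED",
    "INDETERMINATE",
    "INCIDENTAL_FINDING_DOMINANT",
]

def build_yield_prompt(report_text: str, labels: list[str] = PE_LABELS) -> str:
    """Assemble a classification prompt that demands exactly one label.
    Constraining the output space is what makes results countable,
    and therefore auditable, at population scale."""
    label_block = "\n".join(f"- {label}" for label in labels)
    return (
        "You are auditing a CTPA report for diagnostic yield.\n"
        "Reply with exactly one label from this list and nothing else:\n"
        f"{label_block}\n\n"
        f"Report:\n{report_text}\n"
    )

prompt = build_yield_prompt("No pulmonary embolus. Small pleural effusion.")
```

The model's reply can then be validated against the label list before it enters any count; replies outside the list are rejected and retried.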

02 · failure

Classifying Failure Modes, Not Just Errors

When AI and radiologist disagree, the interesting question is not "who was right?" but "what does the disagreement reveal about report quality?" LLM-based classification reframes AI validation as a probe of communication standards. This inverts the usual defensiveness around AI performance.

03 · governance

Institutional Intelligence at Multi-Trust Scale

Recommendation drift, incidental finding rates, report ambiguity trends, outlier reporter behaviour — these are governable phenomena. They are currently invisible. LLMs across a regional network produce the intelligence layer that clinical directors and regulators need but do not have.
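As a concrete illustration, one governable phenomenon, outlier recommendation behaviour, can be surfaced with funnel-plot-style control limits around the pooled peer rate. The sketch below uses approximate three-sigma binomial limits and invented figures:

```python
import math

def outlier_reporters(counts: dict[str, tuple[int, int]]) -> list[str]:
    """Flag reporters whose recommendation rate falls outside approximate
    3-sigma binomial control limits around the pooled peer rate.
    counts maps reporter -> (recommendations made, reports issued)."""
    total_rec = sum(rec for rec, n in counts.values())
    total_n = sum(n for rec, n in counts.values())
    p = total_rec / total_n                      # pooled peer rate
    flagged = []
    for reporter, (rec, n) in counts.items():
        limit = 3 * math.sqrt(p * (1 - p) / n)   # funnel narrows as n grows
        if abs(rec / n - p) > limit:
            flagged.append(reporter)
    return flagged

counts = {
    "Reporter A": (36, 200),   # 18% recommendation rate
    "Reporter B": (32, 190),   # ~17%
    "Reporter C": (80, 210),   # ~38%, well above peers
}
```

In production the same arithmetic runs over LLM-extracted recommendation flags rather than hand-counted figures, and exact binomial limits would replace the normal approximation.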

04 · education

Adaptive, Evidence-Based Radiology Education

LLMs can generate case-based teaching from real report corpora, identify where trainee language diverges from consultant norms, and produce targeted feedback at individual and cohort level. Education grounded in actual institutional performance data — not curated textbook cases.
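A crude but illustrative proxy for that divergence is per-report term frequency compared between trainee and consultant corpora; an LLM pipeline would make a far richer comparison. The reports below are invented:

```python
from collections import Counter

def frequency_divergence(trainee: list[str], consultant: list[str], top: int = 3) -> list[str]:
    """Rank terms by how much more often they appear (per report) in
    trainee reports than in consultant reports."""
    def doc_freq(reports: list[str]) -> Counter:
        freq = Counter()
        for r in reports:
            freq.update(set(r.lower().split()))  # count each term once per report
        return freq

    t, c = doc_freq(trainee), doc_freq(consultant)
    scored = {w: t[w] / len(trainee) - c[w] / len(consultant) for w in set(t) | set(c)}
    return sorted(scored, key=scored.get, reverse=True)[:top]

trainee = [
    "Possibly a small consolidation, clinical correlation advised",
    "Cannot exclude subtle fracture, clinical correlation advised",
]
consultant = [
    "Small consolidation in the left base",
    "No fracture",
]
```

On this toy corpus the hedging boilerplate "clinical correlation advised" surfaces immediately, which is exactly the kind of cohort-level feedback the education stream is after.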

05 · audit

Closing the Loop That Has Never Been Closed

Correlating radiological conclusions with downstream coding, outcome data, and MDT decisions is now within reach. LLMs handle the heterogeneous text at every node of this loop. The result could be the first large-scale audit infrastructure radiology has ever had.
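One node of that loop can be sketched directly. Assume report classifications and discharge codes have already been linked by an admission identifier (that linkage is the genuinely hard part); concordance is then a simple join. The label names are illustrative, though I26 is the genuine ICD-10 family for pulmonary embolism:

```python
def concordance(report_labels: dict[str, str],
                discharge_codes: dict[str, set[str]],
                positive_label: str = "PE_CONFIRMED",
                code_prefix: str = "I26") -> float:
    """Fraction of PE-positive reports whose linked admission carries a
    pulmonary embolism discharge code (ICD-10 I26.x)."""
    positives = [k for k, label in report_labels.items() if label == positive_label]
    if not positives:
        return 0.0
    matched = sum(
        1 for k in positives
        if any(code.startswith(code_prefix) for code in discharge_codes.get(k, set()))
    )
    return matched / len(positives)

labels = {"adm-1": "PE_CONFIRMED", "adm-2": "PE_EXCLUDED", "adm-3": "PE_CONFIRMED"}
codes = {"adm-1": {"I26.9"}, "adm-2": {"J18.1"}, "adm-3": {"J90"}}
```

A confirmed PE with no matching code (adm-3 here) is precisely the kind of case worth surfacing: a coding failure, a reporting ambiguity, or a communication gap at the report-to-ward boundary.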

06 · scale

The Data Advantage Is Already There

Every NHS trust has the raw material. No new data collection, no prospective study design, no imaging data governance hurdles. The barrier to entry is low; the yield is high; the timeline is now, not in five years.

The CLEARaudit workstream of the National CLEAR Programme (Clinically-Led workforcE and Activity Redesign) is an operational proof of this thesis. Across three workstreams — half a million acute CT reports from 15 Midlands trusts, a national CT colonography quality audit across 8 centres of excellence, and an NHSE-commissioned evaluation of four high-demand examination types — it produces structured governance intelligence from unstructured report text at a scale that no conventional audit process could match.

This is what a learning radiology system looks like. It does not require waiting for multimodal AI to mature, for PACS integration to be solved, or for regulatory frameworks to catch up with imaging AI. It requires taking seriously the information that radiologists have already produced.

02
The implementation stream
The Radiologist as Programmer: AI Tooling as Professional Emancipation

There is a second transformation underway that receives almost no attention in radiology AI discourse, because it is happening to individuals rather than to systems. It concerns what happens when a clinician with deep domain knowledge can, for the first time, express that knowledge directly in code — without a software engineering intermediary.

AI coding assistants have collapsed the barrier between clinical insight and computational implementation. A radiologist who understands the nuance of PE reporting, the subtlety of nodule follow-up pathways, or the patterns of recommendation drift — but who previously had no route from that understanding to a working analytical tool — now has one. The translation layer has been removed.

ctpa_classifier.py · radiologist-authored
# A radiologist wrote this. No software engineer required.
# Domain knowledge expressed directly as executable logic.

from enum import Enum, auto

class PEClassification(Enum):
    """Structured PE yield taxonomy. Categories reflect clinical
    reality, not algorithmic convenience."""
    CONFIRMED_CENTRAL = auto()
    CONFIRMED_SEGMENTAL = auto()
    CONFIRMED_SUBSEGMENTAL = auto()
    NEGATIVE_EXPLICIT = auto()
    NEGATIVE_IMPLICIT = auto()
    INDETERMINATE = auto()                # technically limited; clinical correlation required
    INCIDENTAL_FINDING_DOMINANT = auto()

def classify_pe_conclusion(report_text: str) -> PEClassification:
    """Classify a CTPA report against the structured PE yield taxonomy."""
    # PE_TAXONOMY, REPORT_QUALITY_RUBRIC, VALIDATED_EXAMPLES and the two
    # helper functions are defined elsewhere in the pipeline.
    prompt = build_classification_prompt(
        report=report_text,
        taxonomy=PE_TAXONOMY,
        quality_flags=REPORT_QUALITY_RUBRIC,
        few_shot_examples=VALIDATED_EXAMPLES,
    )
    return llm_classify(prompt, model="claude-sonnet")

# This taxonomy was designed by someone who has read 10,000 CTPA reports.
# That expertise is now encoded, reproducible, and scalable.

This is not about radiologists becoming software engineers. It is about the professional knowledge that resides in clinical experts — knowledge that has historically been locked in heads, expressed only in reports, and lost when those experts retire — becoming executable, auditable, and transferable.

claim 01

Domain Knowledge Becomes Infrastructure

When a radiologist encodes their classification logic, reporting standards, or pathway criteria as code, that knowledge stops being personal and becomes institutional. It can be version-controlled, peer-reviewed, validated, and deployed at scale. This is a fundamentally new mode of knowledge transfer in medicine.

claim 02

The Intermediary Problem Is Solved

The traditional path from clinical insight to software tool passes through months of requirements gathering, developer interpretation, and iterative miscommunication. AI coding assistants make that path direct. A consultant radiologist can move from idea to working pipeline in an afternoon.

claim 03

Bespoke Tools at NHS Scale

A radiologist who can build their own tools can address the specific clinical and governance problems of their department, their trust, their patient population. A long tail of clinical need — currently unserved by any vendor — becomes addressable.

claim 04

A New Career Dimension for the Specialty

Radiologists who develop computational fluency become architects of the systems that govern their specialty, not passive consumers of tools built elsewhere. This has implications for workforce planning, job satisfaction, research capacity, and the intellectual identity of radiology as a discipline.

The question is not whether radiologists should learn to code in the traditional sense. It is whether they should be able to express their clinical intelligence in a form that machines can execute. AI tooling has made the answer yes — and the implications are profound.

Synthesis

Two Streams, One Argument: Radiology Must Own Its Future

Stream One gives us

A feedback architecture between radiology output and institutional performance. Governance intelligence that does not currently exist. A learning system, not merely a reading service.

Stream Two gives us

Clinical experts who can build. Domain knowledge that becomes executable infrastructure. A specialty that authors its own tools rather than procuring them from vendors with misaligned incentives.

Together they address

The deepest failure of NHS radiology: that it generates vast knowledge, captures almost none of it systematically, and has no feedback architecture for improvement. These streams build that architecture.

The RCR's role

To recognise this as a professional agenda, not merely a technical one. To establish standards for LLM-based governance tools. To champion computational fluency as a legitimate and valued radiologist competency.

Image AI will make individual radiologists faster and safer. LLM-based governance and the radiologist-as-programmer will make radiology, as a system and as a profession, fundamentally different. Both matter. Only one of them is currently being planned for.

Postscript

This Website Is the Argument

The site you are reading was built from scratch using Claude Code — steered intermittently by a consultant radiologist attending to household duties on a Saturday morning. Total human input: under two hours. The result is a fully responsive, professionally designed personal academic website, now live at david-rosewarne.net, hosted at no cost on GitHub Pages.

No web developer was commissioned. No agency was briefed. No procurement cycle was initiated. The radiologist described what they wanted; the tool built it; the radiologist corrected and refined; the tool rebuilt. The iteration was rapid, the output professional, and the cost zero.

What this demonstrates

The barrier between clinical expertise and professional digital presence has effectively collapsed. A radiologist who wishes to communicate their work — to the College, to collaborators, to the wider profession — no longer needs institutional communications support, a budget, or technical intermediaries. The tools exist. The capability is personal.

What it implies for the College

If a radiologist can build a credentialled academic website in a Saturday morning, they can build reporting audit pipelines, referral decision tools, and educational platforms in a week. The constraint is no longer technical access — it is awareness that the constraint has been removed. The RCR is well-placed to close that awareness gap.

The communication case

Radiologists who can build and publish — who can put their reasoning, their evidence, and their governance work in front of an audience without friction — become a profession of better communicators with increased influence.

The compounding effect

Each radiologist who develops this capability becomes a node: publishing tools, sharing pipelines, demonstrating what is possible. The DigitalElf Project is built on this principle. The College could accelerate it enormously — through recognition, through training, and through infrastructure that makes sharing the default.

The argument of this document is not abstract. The document's own existence — and the website that carries it — is a worked example of both streams in action: domain knowledge expressed as a structured case, delivered through a tool built without a development team, communicated to a professional audience at zero marginal cost. That is the power now available to every radiologist desiring it.