Profile
I am a consultant radiologist at New Cross Hospital, Wolverhampton, where I have worked since 2009. My clinical practice covers the full general radiology service — CT, MRI, ultrasound, fluoroscopy, and plain film reporting — with subspecialty interests in chest imaging, head and neck, cardiac CT, and image-guided biopsy and drainage. I have a particular interest in interstitial lung disease and, more recently, lung cancer screening through the Targeted Lung Health Check programme.
My background is unusual for a radiologist. I read Natural Sciences at Cambridge, then completed a PhD in quantum optics at the University of London, before working for two years as an analyst at Logica on machine vision and signal processing projects for UK Government departments. I subsequently retrained in medicine at Birmingham, qualifying with honours in 1999.
Since 2020 I have developed a parallel strand of work in digital medicine, applying natural language processing, large language models, and clinical AI to problems in radiology audit, education, and service improvement. I held an NHS England Topol Fellowship (2022–23) and was Radiology Lead for the West Midlands Clinical Research Network (NIHR, 2022–24). I am a Visiting Researcher at the University of Surrey, working on human-AI cooperation.
Running through all of this is a single conviction, argued in the BMJ in 2023 and embodied in The DigitalElf Project: that clinicians who hold deep domain knowledge are the right people to build the tools their specialty needs, and that — with current AI tooling — the technical barrier to doing so has become low enough that waiting for industry to build those tools is no longer always necessary.
Research & Digital Interests
- CLEARaudit — LLM-based radiology audit at scale. CLEARaudit is the radiology audit workstream delivered through the National CLEAR Programme and its partner 33n. As clinical lead, I have worked with CLEAR to assemble a network of NHS imaging departments and establish multicentre data sharing and analysis. Full programme description ↓
- Clinical AI deployment and evaluation. Leading a departmental project with harrison.ai to implement AI triage for the lung cancer pathway and acute CT brain reporting. Previous work included evaluation of chest radiograph models as part of the Topol Fellowship.
- Human-AI cooperation in medical imaging (PecMan, University of Surrey). Visiting researcher at the Centre for Vision, Speech and Signal Processing (Prof. Gustavo Carneiro's group). The PecMan project investigated the optimisation of human-AI complementarity and algorithmic fairness in mammogram interpretation. We plan to extend this work to wider clinical domains involving multi-modal data, in projects currently seeking EPSRC support.
- Error detection and quality improvement — REALM. Using language models to identify cases suitable for error analysis in acute radiology, producing targeted educational material and supporting departmental quality improvement.
- Teaching case harvesting — Auto-TeaCH. Co-developer (with Roger Marlow) of Auto-TeaCH, an application that catalogues radiology reports at scale to find and aggregate teaching material within hospital systems.
- Radiology sustainability. Three posters at ECR 2025 (Planet Radiology session) examining LLM-facilitated analysis of CT brain follow-up recommendation rates, CTPA yield assessment at scale, and lossless PACS image compression.
- Digital education and the DigitalElf Project. With Roger Marlow I co-founded The DigitalElf Project, which documents how clinicians can engage productively with digital medicine using open-source tools and code generation.
CLEARaudit
CLEARaudit is the radiology audit workstream of the National CLEAR Programme — Clinically-Led workforcE and Activity Redesign — a broader NHS initiative addressing workforce planning and service redesign across clinical specialties. I lead the radiology data science within CLEARaudit, having advocated for multicentre collaboration and convened the imaging network partners. The work is conducted within a trusted research environment operated by 33n, enabling compliant processing of identifiable NHS data across participating trusts.
The central hypothesis is that LLM-based audit, applied at scale across a network of NHS departments, can identify variation in radiological practice that escapes conventional point-in-time audit. Structured feedback to departments is expected to reduce unwarranted variation, improve reporting standardisation, and reduce unnecessary downstream imaging and its associated cost and radiation burden.
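The mechanics of that hypothesis are simple once each report has been classified. The sketch below is illustrative only: the trust names, rates, and threshold are hypothetical, and in practice the per-department rates would come from LLM classification of real report corpora rather than hand-entered figures.

```python
from statistics import mean, stdev

def flag_outlier_departments(rates: dict[str, float], z_threshold: float = 1.5) -> list[str]:
    """Flag departments whose audit metric deviates from the network norm."""
    values = list(rates.values())
    mu, sigma = mean(values), stdev(values)
    return [dept for dept, r in rates.items() if sigma and abs(r - mu) / sigma > z_threshold]

# Hypothetical follow-up recommendation rates per department (fraction of reports)
rates = {"Trust A": 0.12, "Trust B": 0.11, "Trust C": 0.13, "Trust D": 0.31, "Trust E": 0.12}
print(flag_outlier_departments(rates))  # → ['Trust D']
```

The point is not the statistics, which are deliberately elementary, but that variation invisible to point-in-time audit becomes a one-line query once the report archive has been made legible.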
More on audit automation in the CLEARtalk podcast.
Publications & Presentations
Positions Held
Consultant Radiologist
2009 – present
General radiology service (CT, MRI, ultrasound, fluoroscopy, plain film), including image-guided biopsy and drainage, acute emergency rota, and subspecialty work in chest imaging, head and neck, and cardiac CT. Clinical tutor to radiology trainees 2014–2022; FRCR Part 1 Physics Examiner 2017–2022.
Visiting Researcher
2024 – present
Member of Professor Gustavo Carneiro's group, contributing to the PecMan project on human-AI cooperation and algorithmic fairness in mammogram interpretation.
Radiology Lead, West Midlands Clinical Research Network
2022 – 2024
NIHR-funded role supporting radiology research in the West Midlands, with a personal focus on natural language processing of radiology reports for clinical audit at scale.
Topol Fellow
2022 – 2023
Fellowship in digital transformation in health. Project work centred on the installation and clinical evaluation of AI-assisted chest radiograph reporting.
Analyst
1992 – 1994
Analysis and programming on machine vision and signal processing projects contracted by UK Government departments.
Collaborators
The National CLEAR Programme (Clinically-Led workforcE and Activity Redesign) is a national NHS initiative addressing workforce planning and service redesign across clinical specialties, of which CLEARaudit is the radiology audit workstream. The programme operates through 33n's trusted research environment, which provides compliant infrastructure for multicentre processing of NHS data. I lead the radiology data science within CLEARaudit.
DetectedX is a platform for AI-assisted radiology education and perception training, used internationally for case-based learning and performance benchmarking. I am exploring its application within the West Midlands Imaging Academy, alongside the use of generative AI to curate training material and support trainee portfolio development.
I work with Professor Gustavo Carneiro's group on the PecMan project, which investigates the optimisation of human-AI complementarity and algorithmic fairness in mammogram interpretation. My role is to interface machine learning research with clinical practice and contribute to the translation of findings into radiologically meaningful outcomes.
AI Strategy
Two Parallel Revolutions in Radiology AI
The case for LLMs in governance & education · The radiologist as programmer
The decade-defining opportunity in radiology AI is not better image perception alone. It is the emergence of two parallel streams — one that gives radiology a nervous system for learning and accountability, and one that gives every radiologist the power to build.
Radiology has accumulated an extraordinary and almost entirely unmined asset: decades of free-text reports, sitting in RIS systems across every trust in the country. These reports encode clinical reasoning, diagnostic thresholds, uncertainty, recommendation behaviour, and communication quality — at scale, across time, across institutions.
Large language models can read this archive. That sentence should be understood as a structural claim, not a technical one. It means that, for the first time, the output of radiology as a discipline is legible to a system capable of measuring it.
No specialty can improve systematically without a feedback architecture connecting its output to institutional performance. Radiology has almost none. LLMs are the plausible mechanism for building it — using data that already exists, deployable now, without PACS integration or imaging data governance barriers.
Measuring What Radiology Actually Produces
LLMs can classify whether reports meet structured quality standards, whether conclusions are unambiguous, whether PE is confirmed or excluded, whether incidental findings are appropriately escalated. This is yield measurement at population scale — currently impossible without them.
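At its simplest, yield measurement is classification followed by counting. In the sketch below a few keyword rules stand in for the LLM classifier (`classify_ctpa_conclusion` is a toy surrogate, not the production approach); the aggregation logic, however, is the same at any scale.

```python
def classify_ctpa_conclusion(conclusion: str) -> str:
    """Toy stand-in for an LLM classifier: map a CTPA conclusion to a yield category."""
    text = conclusion.lower()
    if "no pulmonary embol" in text or "no pe" in text:
        return "NEGATIVE"
    if "pulmonary embol" in text:
        return "CONFIRMED"
    return "INDETERMINATE"

def ctpa_yield(conclusions: list[str]) -> float:
    """Fraction of studies with confirmed PE: yield measurement at population scale."""
    labels = [classify_ctpa_conclusion(c) for c in conclusions]
    return labels.count("CONFIRMED") / len(labels)

# Hypothetical report conclusions
reports = [
    "Segmental pulmonary embolism in the right lower lobe.",
    "No pulmonary embolism. Incidental 6 mm lung nodule.",
    "Technically limited study; clinical correlation required.",
    "No pulmonary embolism.",
]
print(f"{ctpa_yield(reports):.0%}")  # → 25%
```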
Classifying Failure Modes, Not Just Errors
When AI and radiologist disagree, an interesting question is not "who was right?" but "what does the disagreement reveal about report quality?" LLM-based classification reframes AI validation as a probe of communication standards. This inverts the usual defensiveness around AI performance.
Institutional Intelligence at Multi-Trust Scale
Recommendation drift, incidental finding rates, report ambiguity trends, outlier reporter behaviour — these are governable phenomena. They are currently invisible. LLMs across a regional network produce the intelligence layer that clinical directors and regulators need but do not have.
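Recommendation drift, for instance, reduces to a per-quarter rate once each report carries a label. The (quarter, label) pairs below are hypothetical stand-ins for LLM output; real input would be accession-level classifications pooled across the network.

```python
from collections import defaultdict

def recommendation_rate_by_quarter(labelled: list[tuple[str, bool]]) -> dict[str, float]:
    """Per-quarter rate of reports carrying a follow-up recommendation."""
    totals, recs = defaultdict(int), defaultdict(int)
    for quarter, has_recommendation in labelled:
        totals[quarter] += 1
        recs[quarter] += has_recommendation
    return {q: recs[q] / totals[q] for q in sorted(totals)}

# Hypothetical (quarter, has-follow-up-recommendation) pairs
data = [("2024-Q1", True), ("2024-Q1", False), ("2024-Q1", False), ("2024-Q1", False),
        ("2024-Q2", True), ("2024-Q2", True), ("2024-Q2", False), ("2024-Q2", False)]
print(recommendation_rate_by_quarter(data))  # rate rising from 0.25 to 0.5
```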
Adaptive, Evidence-Based Radiology Education
LLMs can generate case-based teaching from real report corpora, identify where trainee language diverges from consultant norms, and produce targeted feedback at individual and cohort level. Education grounded in actual institutional performance data — not curated textbook cases.
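One concrete divergence signal is hedging language. The sketch below is a deliberately crude surrogate (a fixed phrase list where an LLM would judge each report in context), and all report texts are invented, but it illustrates the cohort-level comparison.

```python
HEDGING_TERMS = ("possible", "cannot exclude", "query", "may represent")

def hedging_rate(reports: list[str]) -> float:
    """Fraction of reports containing at least one hedging phrase."""
    return sum(any(t in r.lower() for t in HEDGING_TERMS) for r in reports) / len(reports)

# Hypothetical report conclusions from two cohorts
trainee = ["Possible consolidation left base.", "Cannot exclude subtle fracture.", "Clear chest."]
consultant = ["Left basal consolidation.", "No fracture.", "Clear chest."]
print(hedging_rate(trainee), hedging_rate(consultant))  # trainee cohort hedges more often
```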
Closing the Loop That Has Never Been Closed
Correlating radiological conclusions with downstream coding, outcome data, and MDT decisions is now within reach. LLMs handle the heterogeneous text at every node of this loop. The result could deliver the first large-scale audit infrastructure radiology has had.
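The join itself is mundane: once each node of the loop has been reduced to structured labels, closing it is a lookup by accession number. Every identifier, label, and code below is hypothetical.

```python
def close_the_loop(conclusions: dict[str, str], coding: dict[str, str]) -> list[tuple[str, str, str]]:
    """Join radiological conclusion to downstream coding by accession number."""
    return [(acc, conclusions[acc], coding.get(acc, "NO CODED OUTCOME")) for acc in conclusions]

# Hypothetical accession-keyed outputs from report classification and the coding system
conclusions = {"ACC001": "CONFIRMED_PE", "ACC002": "NEGATIVE"}
coding = {"ACC001": "I26.9 Pulmonary embolism"}
for row in close_the_loop(conclusions, coding):
    print(row)
```

The hard part, which the sketch omits, is producing those labels reliably from heterogeneous free text; that is precisely the step LLMs supply.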
The Data Advantage Is Already There
Every NHS trust has the raw material. No new data collection, no prospective study design, no imaging data governance hurdles. The barrier to entry is low; the yield is high; the timeline is now, not in five years.
The CLEARaudit workstream of the National CLEAR Programme (Clinically-Led workforcE and Activity Redesign) is an operational proof of this thesis. Across three workstreams — half a million acute CT reports from 15 Midlands trusts, a national CT colonography quality audit across 8 centres of excellence, and an NHSE-commissioned evaluation of four high-demand examination types — it produces structured governance intelligence from unstructured report text at a scale that no conventional audit process could match.
This is what a learning radiology system looks like. It does not require waiting for multimodal AI to mature, for PACS integration to be solved, or for regulatory frameworks to catch up with imaging AI. It requires taking seriously the information that radiologists have already produced.
There is a second transformation underway that receives almost no attention in radiology AI discourse, because it is happening to individuals rather than to systems. It concerns what happens when a clinician with deep domain knowledge can, for the first time, express that knowledge directly in code — without a software engineering intermediary.
AI coding assistants have collapsed the barrier between clinical insight and computational implementation. A radiologist who understands the nuance of PE reporting, the subtlety of nodule follow-up pathways, or the patterns of recommendation drift — but who previously had no route from that understanding to a working analytical tool — now has one. The translation layer has been removed.
```python
# A radiologist wrote this. No software engineer required.
# Domain knowledge expressed directly as executable logic.

def classify_pe_conclusion(report_text: str) -> PEClassification:
    """
    Classify CTPA report against structured PE yield taxonomy.

    Categories reflect clinical reality, not algorithmic convenience:
    - CONFIRMED_CENTRAL / CONFIRMED_SEGMENTAL / CONFIRMED_SUBSEGMENTAL
    - NEGATIVE_EXPLICIT / NEGATIVE_IMPLICIT
    - INDETERMINATE (technically limited, clinical correlation required)
    - INCIDENTAL_FINDING_DOMINANT
    """
    prompt = build_classification_prompt(
        report=report_text,
        taxonomy=PE_TAXONOMY,
        quality_flags=REPORT_QUALITY_RUBRIC,
        few_shot_examples=VALIDATED_EXAMPLES,
    )
    return llm_classify(prompt, model="claude-sonnet")

# This taxonomy was designed by someone who has read 10,000 CTPA reports.
# That expertise is now encoded, reproducible, and scalable.
```
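The fragment assumes several supporting pieces: a `PEClassification` type, a `build_classification_prompt` helper, and an `llm_classify` wrapper around the model API. A minimal, hypothetical sketch of the first two might look like this (the model call itself is deliberately left out):

```python
from enum import Enum

class PEClassification(Enum):
    """Yield taxonomy for CTPA conclusions."""
    CONFIRMED_CENTRAL = "confirmed_central"
    CONFIRMED_SEGMENTAL = "confirmed_segmental"
    CONFIRMED_SUBSEGMENTAL = "confirmed_subsegmental"
    NEGATIVE_EXPLICIT = "negative_explicit"
    NEGATIVE_IMPLICIT = "negative_implicit"
    INDETERMINATE = "indeterminate"
    INCIDENTAL_FINDING_DOMINANT = "incidental_finding_dominant"

def build_classification_prompt(report, taxonomy, quality_flags, few_shot_examples):
    """Assemble taxonomy, rubric, examples, and report into one prompt string."""
    return (f"Classify this CTPA report into one of: {taxonomy}\n"
            f"Quality rubric: {quality_flags}\n"
            f"Worked examples: {few_shot_examples}\n"
            f"Report: {report}")
```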
This is not about radiologists becoming software engineers. It is about the professional knowledge that resides in clinical experts — knowledge that has historically been locked in heads, expressed only in reports, and lost when those experts retire — becoming executable, auditable, and transferable.
Domain Knowledge Becomes Infrastructure
When a radiologist encodes their classification logic, reporting standards, or pathway criteria as code, that knowledge stops being personal and becomes institutional. It can be version-controlled, peer-reviewed, validated, and deployed at scale. This is a fundamentally new mode of knowledge transfer in medicine.
The Intermediary Problem Is Solved
The traditional path from clinical insight to software tool passes through months of requirements gathering, developer interpretation, and iterative miscommunication. AI coding assistants make that path direct. A consultant radiologist can move from idea to working pipeline in an afternoon.
Bespoke Tools at NHS Scale
A radiologist who can build their own tools can address the specific clinical and governance problems of their department, their trust, their patient population. A long tail of clinical need — currently unserved by any vendor — becomes addressable.
A New Career Dimension for the Specialty
Radiologists who develop computational fluency become architects of the systems that govern their specialty, not passive consumers of tools built elsewhere. This has implications for workforce planning, job satisfaction, research capacity, and the intellectual identity of radiology as a discipline.
The question is not whether radiologists should learn to code in the traditional sense. It is whether they should be able to express their clinical intelligence in a form that machines can execute. AI tooling has made the answer yes — and the implications are profound.
Two Streams, One Argument: Radiology Must Own Its Future
Stream One gives us
A feedback architecture between radiology output and institutional performance. Governance intelligence that does not currently exist. A learning system, not merely a reading service.
Stream Two gives us
Clinical experts who can build. Domain knowledge that becomes executable infrastructure. A specialty that authors its own tools rather than procuring them from vendors with misaligned incentives.
Together they address
The deepest failure of NHS radiology: that it generates vast knowledge, captures almost none of it systematically, and has no feedback architecture for improvement. These streams build that architecture.
The RCR's role
To recognise this as a professional agenda, not merely a technical one. To establish standards for LLM-based governance tools. To champion computational fluency as a legitimate and valued radiologist competency.
Image AI will make individual radiologists faster and safer. LLM-based governance and the radiologist-as-programmer will make radiology, as a system and as a profession, fundamentally different. Both matter. Only one of them is currently being planned for.
This Website Is the Argument
The site you are reading was built from scratch using Claude Code — steered intermittently by a consultant radiologist attending to household duties on a Saturday morning. Total human input: under two hours. The result is a fully responsive, professionally designed personal academic website, now live at david-rosewarne.net, hosted at no cost on GitHub Pages.
No web developer was commissioned. No agency was briefed. No procurement cycle was initiated. The radiologist described what they wanted; the tool built it; the radiologist corrected and refined; the tool rebuilt. The iteration was rapid, the output professional, and the cost zero.
What this demonstrates
The barrier between clinical expertise and professional digital presence has effectively collapsed. A radiologist who wishes to communicate their work — to the College, to collaborators, to the wider profession — no longer needs institutional communications support, a budget, or technical intermediaries. The tools exist. The capability is personal.
What it implies for the College
If a radiologist can build a credentialled academic website in a Saturday morning, they can build reporting audit pipelines, referral decision tools, and educational platforms in a week. The constraint is no longer technical access — it is awareness that the constraint has been removed. The RCR is well-placed to close that awareness gap.
The communication case
Radiologists who can build and publish — who can put their reasoning, their evidence, and their governance work in front of an audience without friction — become a profession of better communicators with increased influence.
The compounding effect
Each radiologist who develops this capability becomes a node: publishing tools, sharing pipelines, demonstrating what is possible. The DigitalElf Project is built on this principle. The College could accelerate it enormously — through recognition, through training, and through infrastructure that makes sharing the default.
The argument of this document is not abstract. The document's own existence — and the website that carries it — is a worked example of both streams in action: domain knowledge expressed as a structured case, delivered through a tool built without a development team, communicated to a professional audience at zero marginal cost. That is the power now available to every radiologist who wants it.