Integrating AI into Radiology: A Framework for Safer, Smarter Imaging

Summary
- The Miller School’s Dr. Jean Jose was part of a multidisciplinary team that created a comprehensive framework for integrating AI into radiology.
- The team began with Institutional Review Board (IRB)-approved pilot projects and evaluated infrastructure, staffing and financial feasibility.
- The study showed the proposed framework enhances diagnostic precision, reduces radiologist burnout and improves patient outcomes.
Artificial intelligence (AI) is reshaping radiology, offering transformative potential in diagnostic accuracy, workflow efficiency and patient care. In a recent study published in American Hospital & Healthcare Management, Jean Jose, D.O., and a multidisciplinary team from the University of Miami Miller School of Medicine presented a comprehensive framework for integrating AI into radiology practice.
“AI is something that is impacting every aspect of our lives,” said Dr. Jose, professor of clinical radiology and orthopaedics at the Miller School and medical director of radiology at Lennar Foundation Medical Center. “In medicine, its initial deployment has been most prominent in radiology. We are tied into technology. We analyze digital images and interact with advanced computer systems daily.”
The research team’s work in this first-in-the-nation deployment emphasizes the critical role of human intervention, ethical oversight and structured workflows to ensure the safe and effective use of technology to improve patient health.
In addition to Dr. Jose, Miller School researchers involved in the project included:
• Alexander McKinney, M.D., professor and chair of radiology
• Yiannis Chatzizisis, M.D., Ph.D., professor and chief of cardiovascular medicine
• Steven Falcone, M.D., M.B.A., professor and associate chair of radiology
• Fernando Collado-Mesa, M.D., professor and associate vice chair of AI research and ethical use in the Department of Radiology
• Jose M. Net, M.D., associate professor and associate vice chair of radiology
• Thiago Braga, M.D., assistant professor and associate vice chair of radiology
• Chloe Issa, M.D., a member of the Miller School’s Class of 2025
Research Context and Objectives
The study addresses a pressing challenge: how to incorporate AI-generated findings into clinical radiology workflows while maintaining regulatory compliance and patient trust. AI tools, particularly those using natural language processing, can automate triage, prioritize imaging studies and flag critical conditions such as stroke or pulmonary embolism.
However, without clear protocols and human oversight, these tools risk undermining care quality and ethical standards. With no current national governance for this new paradigm, Dr. Jose and his team had to define their own study boundaries.

“We first created language definitions and then categories,” he said. “At this point, every institution is responsible for its own deployment and governance. There’s no overarching body that says, ‘We’ve got you covered.’”
Methodology: A Phased, Real-World Approach
The team adopted a phased implementation strategy, beginning with Institutional Review Board (IRB)-approved pilot projects. They evaluated infrastructure, staffing and financial feasibility, adjusting workflows based on real-time feedback.
“The output of AI in imaging is not all the same,” said Dr. Jose. “Some of these findings require a certain level of urgency and supervision. Others do not. So we needed to start with a common language.”
The study categorizes AI-generated imaging findings into five distinct workflow categories, each with specific regulatory and clinical requirements:
• ANIF-C: Actionable, non-incidental, critical findings (e.g., intracranial hemorrhage) requiring immediate intervention.
• AIF-C: Actionable, incidental, critical findings (e.g., incidental pulmonary embolism) managed via point-of-care AI deployment (POCAID).
• ANIF-NC: Actionable, non-incidental, non-critical findings communicated post-discharge.
• AIF-NC: Actionable, incidental, non-critical findings requiring patient follow-up and informed consent.
• Non-FDA Cleared: Findings from experimental algorithms requiring IRB approval and patient consent.
These categories help streamline decision-making and ensure appropriate clinical responses.
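To make the routing idea concrete, here is a minimal sketch, in Python, of how a finding might be mapped to one of these five categories. The class, field names, and action strings are hypothetical illustrations built only from the descriptions above; they are not the authors' software or the institution's actual protocol.

```python
from dataclasses import dataclass


@dataclass
class AIFinding:
    """Hypothetical representation of an AI-generated imaging finding."""
    actionable: bool
    incidental: bool
    critical: bool
    fda_cleared: bool  # whether the generating algorithm is FDA-cleared


def route_finding(f: AIFinding) -> str:
    """Map a finding to one of the article's five workflow categories.

    The category labels follow the article; the routing order and the
    action strings are illustrative assumptions, not the authors' code.
    """
    if not f.fda_cleared:
        # Experimental algorithm: IRB approval and patient consent required
        return "Non-FDA Cleared: IRB approval + patient consent"
    if not f.actionable:
        # The lettered categories all cover actionable findings; this
        # fallback for non-actionable results is an assumption.
        return "Document in report only"
    if f.critical:
        if f.incidental:
            # e.g., incidental pulmonary embolism
            return "AIF-C: manage via point-of-care AI deployment (POCAID)"
        # e.g., intracranial hemorrhage
        return "ANIF-C: immediate intervention"
    if f.incidental:
        return "AIF-NC: patient follow-up + informed consent"
    return "ANIF-NC: communicate post-discharge"


# Example: an incidental pulmonary embolism flagged by an FDA-cleared tool
finding = AIFinding(actionable=True, incidental=True, critical=True, fda_cleared=True)
print(route_finding(finding))  # -> AIF-C: manage via POCAID
```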
Key Findings and Innovations
One of the study’s most impactful contributions is the development of POCAID workflows for critical incidental findings. For example, when AI detects an incidental pulmonary embolism, the patient remains in the imaging center under supervision while radiologists validate the finding. Advanced Practice Providers (APPs) stabilize the patient and coordinate emergency care, significantly reducing time to treatment.
“AI is providing these results within two to five minutes of you finishing your scan,” Dr. Jose said. “With critical findings, our nurses stabilize patients and our clinical coordinators validate the findings and coordinate care immediately.”
That immediate action has translated into measurable improvements in patient outcomes.
“We’re seeing decreased mortality and patients are in the hospital for fewer days,” said Dr. Jose. “This is pretty mind-blowing, actually.”
For non-critical findings like incidental coronary artery calcification (iCAC), the team piloted a follow-up system using AI-based tracking tools. Patients with moderate to high iCAC scores are contacted by coordinators for telehealth consultations and cardiology referrals.
“Once the data is analyzed,” said Dr. Jose, “our nurse practitioners can talk to the patient and suggest a referral to a cardiologist. We close the loop.”
This proactive approach ensures timely intervention and enhances preventive care.
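As a rough illustration of how such a "close the loop" trigger might work, the short sketch below flags patients for coordinator outreach based on an AI-reported calcium score. The Agatston-score cutoff and the action text are assumptions based on common conventions, not the team's actual criteria.

```python
# Illustrative sketch only: a follow-up trigger for incidental coronary artery
# calcification (iCAC). Cutoff and actions are assumed, not the study's protocol.

def icac_followup(agatston_score: float) -> str:
    """Return a hypothetical next step for an AI-reported iCAC score."""
    if agatston_score >= 100:  # assumed cutoff for moderate-to-high burden
        return "Coordinator outreach: telehealth consult + suggested cardiology referral"
    return "Document finding in report; no proactive outreach"


print(icac_followup(250))  # moderate burden -> outreach
print(icac_followup(20))   # low burden -> documentation only
```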
Ethical and Operational Considerations
The study underscores the importance of informed consent, especially for non-FDA-cleared algorithms. While FDA-cleared tools do not require written consent, patients must be informed of AI’s role in their care. The authors also highlighted the need for local validation of AI models, as algorithm performance can vary across institutions due to differences in equipment and patient demographics.
To support these workflows, the Miller School invested in IT infrastructure and in specialized training for APPs and clinical coordinators. Even with these necessary expenses, a theoretical business model projected an annual revenue increase of nearly $200,000 from improved detection and follow-up capturing coronary interventions that would otherwise be missed, justifying the investment.
The study showed the model is cost-effective and offers substantial potential benefits to patients.
Implications for Patient Care
By integrating AI with human oversight, the proposed framework enhances diagnostic precision, reduces radiologist burnout and improves patient outcomes. It also fosters transparency and trust, essential for the ethical use of AI in medicine.
Dr. Jose and colleagues have created a replicable model for AI integration in radiology that balances technological innovation with clinical responsibility. The results of their study have spurred deployment into other areas of UHealth, the University of Miami Health System.
“Now that this workflow has been validated, and because our results are so good,” said Dr. Jose, “we’re expanding to Sylvester Comprehensive Cancer Center and then to UHealth Doral and UHealth SoLé Mia. Eventually, we’ll be at all of our outpatient locations.”
Microsoft Copilot contributed to this article, which was reviewed and approved by Dr. Jean Jose.