Using AI Responsibly
The Miller School’s Dr. Azizi Seixas and Dr. Ferdinand Zizi published a Health Affairs opinion piece that provides guidelines for the use of artificial intelligence in health care.
Two University of Miami Miller School of Medicine experts in artificial intelligence (AI) and the use of technology in health care joined a colleague from the University of Miami’s Herbert Business School to critique a rule that mandates technological transparency in the use of AI.
In an opinion piece published in Health Affairs, Azizi Seixas, Ph.D., interim chair of the Department of Informatics and Health Data Science and associate professor of psychiatry and behavioral sciences at the Miller School; Ferdinand Zizi, director of the Miller School’s Department of Informatics and Health Data Science; and Niam Yaraghi, Ph.D., associate professor of business technology at the Herbert Business School, commended HT-1 while stressing that it is an imperfect document.
HT-1 is the Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing final rule. It implements parts of the 21st Century Cures Act, which regulates medical product development and innovation.
Foremost among HT-1’s tenets is the use of AI in health care. In the article, the authors celebrate HT-1 as a step toward “responsible AI algorithms,” writing, “It emphasizes unbiased decision making, patient safety, and health equity. And by requiring access to large, reliable data sets, the rule promises to significantly boost the development and refinement of AI technologies in health care, ensuring that they are both effective and equitable.”
Despite HT-1’s laudable developments, the authors warn the measure falls short in requiring absolute “accountability, transparency and fairness” for AI algorithms that contribute to patient care. To maximize HT-1’s effectiveness, the authors recommend:
• Data standardization: AI must be able to access diverse sets of data and use them in a way that recognizes social, behavioral and environmental indicators of health.
• Transparency audits: AI should be “explainable,” meaning we must be able to understand how AI does its work.
• Definition standardization: What is fairness? The authors note that even fairness, the overarching goal of HT-1, is not specifically defined. Strict definitions for key elements of the technology’s aims should be developed.
• Auditing: AI developers must demonstrate that their algorithms are achieving their intended objectives.
Read the entire piece at Health Affairs.
The Miller School and AI
AI opens a new chapter in the Gordon Center’s innovative medical training and education programs. Read more
Dr. Latha Chandran spoke about the changes artificial intelligence (AI) is bringing to medicine at the “2024 Business of Health Care Conference.” Read more