August 2020

NEWS

Opinion
The unwanted partner


by Josh Young, MD

I have a concern about medical artificial intelligence (AI). It’s not that AI isn’t going to work or that AI is going to replace me. I’m worried that medical AI will be accurate and helpful, powerful and valuable, and that no one will use it.
Before the advent of medical AI, physicians were introduced to clinical decision support software. These were heuristic systems that gave clinical guidance based on a set of pre-programmed rules. They were largely rejected, not only because they were rigid and rules-based but also because physicians do not like taking advice from machines. An expert in this area, Zhiping Walter, PhD, University of Colorado, describes self-regulation as an important piece of what it means to be a physician.1 We think that only fellow physicians are competent to judge our clinical performance and decisions. I know that when an insurance carrier challenges the necessity of a surgery or medication, I feel affronted. If an AI system second-guesses our decisions, will we be similarly offended?
Airline pilots seem to accept that they cede a bit of their autonomy to automated flight systems. Can we not do the same? Of course, commercial pilots have no say in the matter: if they want to fly for a carrier, they must accept the systems the carrier employs. Will we be required to use whatever AI tools healthcare systems or insurance carriers demand in order to be paid? Ekaterina Jussupow and colleagues presented a metastudy in 2018 showing that physician resistance to clinical decision systems results in non-use and implementation failure.2
The way around this morass is physician education and AI interface design. AI systems, as they exist today, constitute what is known as “narrow AI”: they have analytical capabilities within a focused set of parameters. They produce answers, but they do not know what those answers mean. An AI system designed to detect diabetic macular edema will not recognize age-related macular degeneration. To make use of these powerful systems, ophthalmologists need to understand how the models underlying them work. Indeed, far from being confined by the decisions an AI makes, ophthalmologists will gain an important new skill set as AI experts. In the same way we learned to interpret OCT images, we will learn to interpret AI.
Unfortunately, this interpretation carries with it another sort of risk, that of medical liability. As new as medical AI is, AI liability case law is even newer.3,4 If an AI makes a clinical recommendation that the physician chooses not to follow and a poor outcome results, has the physician put himself at risk? More to the point, if the physician does follow the guidance of the AI and a poor outcome ensues, is the physician at risk for having followed the guidance of the AI?
There are a variety of AI strategies, and one popular architecture, particularly in the field of image recognition, is the convolutional neural network (CNN). Although CNNs can be trained to make accurate diagnoses of diabetic retinopathy, it is difficult to understand how such a system arrives at its diagnoses. Computation is performed within the “hidden layers” of the neural network, and these layers are hidden not only computationally but also, in a very real sense, from the physician. If an AI makes a misdiagnosis, its reason for having done so is opaque to the physician. The liability arising from this “black box medicine” is an area of active discussion today.5
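To make the idea of a “hidden layer” concrete, the sketch below defines a toy CNN classifier in Python using the Keras API. The column names no particular toolkit, so the library choice, layer sizes, and image dimensions here are illustrative assumptions, not a description of any deployed diagnostic system.

    # A minimal CNN sketch (illustrative only; not a clinical tool).
    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Toy binary classifier: does a fundus photograph show disease?
    model = models.Sequential([
        layers.Input(shape=(224, 224, 3)),        # RGB fundus image (assumed size)
        layers.Conv2D(16, 3, activation="relu"),  # first hidden convolutional layer
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),  # second hidden layer
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),      # hidden fully connected layer
        layers.Dense(1, activation="sigmoid"),    # output: probability of disease
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

Even this toy model contains several million learned weights spread across its hidden layers, and after training, none of them corresponds to a clinical criterion a physician could inspect or audit. The network reports a probability; the reasoning behind it stays inside the box.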
Of course, one way to avoid these legal perils is to reject clinical AI entirely. This is precisely the outcome we need to avoid.
Medical AI is the most significant advance medicine has seen in many years. It holds the promise to individualize and democratize healthcare in unprecedented ways. It is important that we foster its development through education, interface design, and legislation so that AI can become the partner we all desire.

About the doctor

Josh Young, MD

Clinical professor
Department of ophthalmology
New York University Grossman School of Medicine
New York, New York

References

1. Walter Z, Lopez MS. Physician acceptance of information technologies: role of perceived threat to professional autonomy. Decision Support Systems. 2008;46:206–215.
2. Jussupow E, et al. I am; we are – conceptualizing professional identity threats from information technology. Proceedings of the 2018 International Conference on Information Systems. 2018.
3. Ordish J. Legal liability for machine learning in healthcare. PHG Foundation, University of Cambridge. 2018.
4. Price WN, et al. Potential liability for physicians using artificial intelligence. JAMA. 2019;322:1765–1766.
5. Harned Z, et al. Machine vision, medical AI, and malpractice. Harvard Journal of Law & Technology Digest. 2019.

Contact

Young: jyoungmd@gmail.com
