Unravelling the Potential of AI in Medical Practice: Trust, Mistrust and Distrust

credit: @gapingvoid

This is the second in a six-part series exploring the incredible potential of AI in healthcare, specifically in the practices of family physicians and specialists in the community. Much of this is incredibly exciting. But doctors live in a world of evidence, and there are good reasons why mistrust and distrust come before our ability to trust these products fully. I wanted to explore those reasons more deeply here.

Doctors’ distrust of AI tools in healthcare can stem from several factors, many of which relate to concerns about patient safety, professional autonomy, and the limitations of AI technology. I present to you ten key reasons why doctors may be skeptical of, or hesitant to embrace, AI tools. These are not reasons to stop exploring; they are perspectives that will, I hope, help developers and AI companies better understand how we think and how uptake might be maximized over the next year or two.

  1. Lack of Understanding: Many doctors may not fully understand how AI algorithms work, leading to a general lack of awareness about their capabilities and limitations. Furthermore, doctors are trained to question and rely heavily on evidence-based thinking, which can contribute to their slow adoption of AI tools. Specific education would help here.  Do we have the right programs to train health professionals and our learners in medical school and residency?
  2. Patient Safety: Doctors have a primary responsibility to ensure patient safety. They may be concerned that AI tools could make errors or provide incorrect recommendations, potentially harming patients. Of course, when AI tools are used in practice, it is the physician who is ultimately responsible for the outcome of any action taken based on the prediction tools. When things go wrong, regulatory bodies have no interest in what the algorithm said, only in what the doctor did based on the information they were given. If that information is wrong, the responsibility still falls on the doctor, and we worry a lot about risk.
  3. Loss of Autonomy: Some doctors worry that increased reliance on AI could lead to a loss of professional autonomy. They may fear that AI tools could make decisions or recommendations that override their clinical judgment. This is especially true in areas where treatment options are nuanced and, in many cases, based on evolving evidence. Think about an oncologist consulting a clinical decision support tool to help decide on a difficult treatment protocol. They have a relationship with the patient. They have always had the ability to balance evidence with patient preference and available resources. Deferring decisions entirely to an algorithm means that doctors may lose that relationship of trust with the patient, and some of the autonomy to make off-label, more personalized choices. This example does not even begin to touch the issue of entire specialties that may be upended by AI, such as radiology, nuclear medicine, and pathology. Those careers may soon need to be completely rewritten.

    credit: @gapingvoid
  4. Data Quality and Bias: AI models depend heavily on the quality and representativeness of their training data. Doctors may be skeptical about the accuracy and fairness of AI tools, especially when they suspect or detect biases in the data. Additionally, we want the training data to reflect our own unique practices. Whenever I speak to physicians who do not trust data, even in quality improvement efforts, they always tell me “My patients are different!”, or “My practice is different!”, or “My region is different!”, and many times they are right. Our models need to take that variance into account where it is genuine and necessary.
  5. Legal and Ethical Issues: The legal and ethical implications of using AI in healthcare can be complex. Doctors may be concerned about liability if AI tools make incorrect recommendations or if patients are harmed as a result. Again, accountability stops with them, no matter what tools they use to arrive at their decisions. A few years down the road, ideas about the “standard of practice” are also going to change. There may come a time when the algorithm is much smarter than we are and we will be faulted for not consulting it. There are strong ethical considerations here, many of which are being worked on by researchers, ethicists and even governments. For the past two years at the HIMSS Global Conference, we have seen many streams focusing on these thorny issues. This year will be no different.
  6. Transparency and Explainability: Many AI models are seen as “black boxes” because they can be difficult to interpret or explain. Doctors may want more transparency and interpretability in AI systems to understand the rationale behind the recommendations. They also need reassurance that the AI engine is not hallucinating or making incorrect interpretations of data. Because the decision pathways in AI algorithms are so complex and variable, and the same question might not always arrive at the same conclusion, we lack confidence that the process is trustworthy. Again, evidence is key here.
  7. Education and Training: Doctors may not have received sufficient education and training on how to use AI tools effectively in their clinical practice. A lack of training can contribute to distrust and reluctance to adopt AI. This is a real problem for doctors currently in practice, but even our current learners are not being trained in an environment of high AI use. When they do use a tool, it is often a language tool such as ChatGPT, helping them write reports and essays. Doing so may even have the opposite effect on learning: reports written by bots are not processed in our own minds, and true understanding of a concept is hard to achieve without reflection and deep thought. This is a problem in critical clinical areas where decisions must be made quickly, and indecision based on a lack of confidence can cause real harm.
    credit: @gapingvoid

  8. Resistance to Change: Like any major technological shift, the introduction of AI tools can face resistance from professionals who are comfortable with traditional methods and hesitant to embrace change. In an evidence-based profession, this is even more pronounced. Studies have shown that it takes 7-14 years for established evidence to become commonly used in practice. Unless there is a demonstrable change in our work, why would this be any different for AI? This is complicated by the speed at which AI is progressing; it is a pace we have never experienced before. If it is too much to incorporate, the danger, of course, is that we will simply tune out.

  9. Financial Considerations: Some doctors may worry that the adoption of AI tools could lead to changes in reimbursement models or impact their income, particularly if AI is used to automate certain tasks. Also, in accountable care organizations, where pay may be linked to hitting predicted targets, faulty AI predictions may have real financial consequences for hospitals and physicians if they do not meet those targets. This would produce a very real degree of distrust.
  10. Privacy Concerns: Doctors may have concerns about patient data privacy and the security of medical records when AI systems are involved in data analysis and decision-making. Where is the data going as the algorithm does its work? Is that place secure? Where did it come from in the first place? Is it truly anonymized? Can AI engines easily re-identify it? This is incredibly important if we are going to be exposing personal and sensitive data, like whole genomes and information on rare diseases. Physicians are usually on the hook for any breaches that occur when their patients’ data is used for any purpose. This weighs heavily on our minds, and mistrust is real.

It’s important to note that all of these concerns are valid in the minds of physicians and lead to mistrust or distrust in the application and adoption of artificial intelligence. AI tools in healthcare are continually evolving and improving. Addressing these issues through transparent algorithm development, robust validation, ongoing education, and collaborative efforts between healthcare professionals and AI developers can help build trust in these tools over time. Doctors’ skepticism may diminish as they gain more experience with AI and see its potential to improve patient care and clinical outcomes. We are trained as skeptics, but we respond very well to evidence!! Let’s build this!


Stay tuned for part 3:  Building trust.

This will be posted next week.

Hit the SUBSCRIBE button on the right of this page to have it arrive instantly in your inbox!

credit: @gapingvoid

4 Replies to “Unravelling the Potential of AI in Medical Practice: Trust, Mistrust and Distrust”

  1. Some good reasons here. Some of them I resonated with immediately, others I had previously never thought about but understood and agreed with. Great series so far, will keep my eye out for the next post.

  2. Ah, finally! I’ve heard so many tech people talk about AI in the medical industry; it’s so nice to hear a medical professional’s perspective. To me, the problem of explainable AI is the most interesting. Have you heard of any promising models in the medical industry that attempt to provide a rationale for their diagnoses/suggestions?

    1. Daniel, thanks for this comment... truly appreciated. There are definitely companies working on clinical decision support algorithms now... but explainability is a huge issue!! This is needed to garner trust from the patients and doctors working with the outputs. Mostly they arrive with confidence intervals, and are often most accurate with simple, direct medical problems, or anything to do with numbers where data is discrete. The real lift in medicine, though, will be in the complex patient who has more than one disorder. I know that one day the models will take complexity into account and offer options, but we are nowhere near that yet. It will come within my clinical lifetime, for sure. In your spare time, can you build this for me please??? 🙂
